Similar Articles
1.
Abstract

Traditional claims-reserving techniques are based on so-called run-off triangles containing aggregate claim figures. Such a triangle provides a summary of an underlying data set with individual claim figures. This contribution explores the interpretation of the available individual data in the framework of longitudinal data analysis. Making use of the theory of linear mixed models, a flexible model for loss reserving is built. Whereas traditional claims-reserving techniques do not lead directly to predictions for individual claims, the mixed model enables such predictions on a sound statistical basis, with, for example, confidence regions. Both a likelihood-based and a Bayesian approach are considered. In the frequentist approach, expressions for the mean squared error of prediction of an individual claim reserve, origin-year reserves, and the total reserve are derived. Using MCMC techniques, the Bayesian approach allows simulation from the complete predictive distribution of the reserves and the calculation of various risk measures. The paper ends with an illustration of the suggested techniques on a data set from practice, consisting of Belgian automotive third-party liability claims. The results of the mixed-model analysis are compared with those obtained from traditional claims-reserving techniques for run-off triangles. For the data under consideration, the lognormal mixed model fits the observed individual data well. It leads to individual predictions comparable to those obtained by applying chain-ladder development factors to individual data. Concerning predictive power at the aggregate level, the mixed model leads to reasonable predictions and performs comparably to, and often better than, the stochastic chain ladder for aggregate data.

2.
Abstract

In a non-life insurance business, an insurer often needs to build up a reserve to be able to meet future obligations arising from claims that are incurred but not yet completely reported. To forecast these claims reserves, a simple but generally accepted algorithm is the classical chain-ladder method. Recent research has essentially focused on the underlying model for the claims reserves in order to arrive at appropriate bounds for the estimates of future claims reserves. Our research concentrates on scenarios with outlying data. On closer examination it is demonstrated that the forecasts for future claims reserves are highly sensitive to outlying observations. The paper focuses on two approaches to robustify the chain-ladder method: the first detects and adjusts the outlying values, whereas the second is based on a robust generalized linear model technique. In this way insurers will be able to find a reserve similar to the one they would have found if the data contained no outliers. Because the robust method flags the outliers, these observations can be singled out for further examination. The corresponding standard errors are obtained by bootstrapping. The robust chain-ladder method is applied to several run-off triangles with and without outliers, showing its excellent performance.
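The classical (non-robust) chain-ladder algorithm that both abstracts take as their starting point can be sketched in a few lines: development factors are estimated column by column as volume-weighted ratios and then used to fill in the unknown lower part of the cumulative triangle. This is a minimal illustration on a made-up 3×3 triangle, not code from either paper.

```python
import numpy as np

def chain_ladder(triangle):
    """Complete a cumulative run-off triangle with volume-weighted
    development factors; NaN marks the unknown lower part."""
    full = np.array(triangle, dtype=float)
    n_dev = full.shape[1]
    for j in range(1, n_dev):
        obs = ~np.isnan(full[:, j])                     # rows observed in column j
        f = full[obs, j].sum() / full[obs, j - 1].sum() # development factor j-1 -> j
        missing = np.isnan(full[:, j])
        full[missing, j] = full[missing, j - 1] * f     # project the missing cells
    return full

triangle = [
    [100.0, 150.0, 160.0],
    [110.0, 165.0, np.nan],
    [120.0, np.nan, np.nan],
]
completed = chain_ladder(triangle)
ultimates = completed[:, -1]                  # predicted ultimate claims per origin year
latest = np.array([160.0, 165.0, 120.0])      # latest observed diagonal
reserves = ultimates - latest                 # outstanding reserve per origin year
```

Because each development factor pools all origin years observed in a column, a single outlying cell shifts the factor and propagates into every projected cell below it, which is exactly the sensitivity the robust variants address.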

3.
This article proposes a new loss reserving approach, inspired by the collective model of risk theory. According to the collective paradigm, we do not relate payments to specific claims or policies, but we work within a frequency-severity setting, with a number of payments in every cell of the run-off triangle, together with the corresponding paid amounts. Compared to the Tweedie reserving model, which can be seen as a compound sum with a Poisson-distributed number of terms and Gamma-distributed summands, we allow here for more general severity distributions, typically mixture models combining a light-tailed component with a heavier-tailed one, including inflation effects. The severity model is fitted to individual observations and not to aggregated data displayed in run-off triangles with a single value in every cell. In that respect, the modeling approach appears to be a powerful alternative to both the crude traditional aggregated approach based on triangles and the extremely detailed individual reserving approach developing each and every claim separately. A case study based on a motor third-party liability insurance portfolio observed over 2004–2014 is used to illustrate the relevance of the proposed approach.

4.
Applications of state space models and the Kalman filter are comparatively underrepresented in stochastic claims reserving, usually because of the high complexity of matrix-based approaches, which complicates their application. In order to facilitate the implementation of state space models in practice, we present a state space model for cumulative payments in the framework of a scalar-based approach. In addition to a comprehensive presentation of this scalar state space model, some empirical applications and comparisons with popular stochastic claims reserving methods are performed, which show the strengths of the scalar state space model in practical applications. This model is a robustified extension of the well-known Chain Ladder method under the assumption that the observations in the upper triangle are based on unobservable states. Using Kalman-filter recursions for prediction, filtering, and smoothing of cumulative payments, the entire unobservable lower and upper run-off triangles can be determined. Moreover, the model provides an easy way to find and smooth outliers and to interpolate gaps in the data. Thus, the problem of missing values in the upper triangle is also solved in a natural way.
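The scalar Kalman recursions underlying such a model follow the standard predict/update pattern. The following is a generic stdlib-only sketch of the textbook scalar filter, not the paper's specific reserving model; all parameter names are illustrative.

```python
def kalman_filter_scalar(ys, a, c, q, r, m0, p0):
    """Filtered state means for the scalar state space model
        state_t = a * state_{t-1} + w_t,  w_t ~ N(0, q)
        obs_t   = c * state_t     + v_t,  v_t ~ N(0, r).
    """
    m, p = m0, p0                       # current state mean and variance
    filtered = []
    for y in ys:
        # prediction step: propagate mean and variance through the state equation
        m_pred = a * m
        p_pred = a * a * p + q
        # update step: blend prediction and observation via the Kalman gain
        gain = p_pred * c / (c * c * p_pred + r)
        m = m_pred + gain * (y - c * m_pred)
        p = (1.0 - gain * c) * p_pred
        filtered.append(m)
    return filtered
```

With nearly noiseless observations (tiny `r`) the filtered means track the data; with a large `r` they shrink toward the model prediction, which is the mechanism that lets such a filter smooth outliers and bridge gaps.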

5.
This paper proposes a Bayesian non-parametric mortality model for a small population, when a benchmark mortality table is also available and serves as part of the prior information. In particular, we extend the Poisson-gamma model of Hardy and Panjer to incorporate correlated and age-specific mortality coefficients. These coefficients, which measure the difference in mortality levels between the small and the benchmark population, follow an age-indexed autoregressive gamma process, and can be stochastically extrapolated to ages where the small population has no historical exposure. Our model substantially improves the computational efficiency of existing two-population Bayesian mortality models by allowing for a closed-form posterior mean and variance of the future number of deaths, and an efficient sampling algorithm for the entire posterior distribution. We illustrate the proposed model with a life insurance portfolio from a French insurance company.

6.
A model for the statistical analysis of the total amount of insurance paid out on a policy is developed and applied. The model simultaneously deals with the number of claims (zero or more) and the amount of each claim. The number of claims follows a Poisson-based discrete distribution. Individual claim sizes follow a continuous right-skewed distribution. The resulting distribution of total claim size is a mixed discrete-continuous model, with positive probability of a zero claim. The means and dispersions of the claim frequency and claim size distributions are modeled in terms of risk factors. The model is applied to a car insurance data set.
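The mixed discrete-continuous structure is easy to see by simulation: with a Poisson claim count and Gamma claim sizes, the total claim has a point mass at zero of exp(-λ) and a continuous right-skewed part otherwise. A stdlib-only sketch with illustrative parameter values (this is the generic compound model, not the fitted model from the paper):

```python
import math
import random

def poisson_draw(lam, rng):
    """Poisson variate via Knuth's multiplication method."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def total_claim(lam, shape, scale, rng):
    """Total claim on a policy: Poisson(lam) number of Gamma(shape, scale) claims."""
    n = poisson_draw(lam, rng)
    return sum(rng.gammavariate(shape, scale) for _ in range(n))

rng = random.Random(7)
sims = [total_claim(2.0, 2.0, 50.0, rng) for _ in range(20000)]
prob_zero = sum(s == 0.0 for s in sims) / len(sims)   # should be near exp(-2)
mean_total = sum(sims) / len(sims)                    # should be near lam * shape * scale = 200
```

In the regression setting of the abstract, `lam`, `shape`, and `scale` would each be linked to risk factors rather than held fixed.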

7.
The prediction of the outstanding loss liabilities for a non-life run-off portfolio, as well as the quantification of the prediction error, is one of the most important actuarial tasks in non-life insurance. In this paper we consider this prediction problem in a multivariate context. More precisely, we derive the predictive distribution of the claims reserves simultaneously for several correlated run-off portfolios in the framework of the chain-ladder claims reserving method.

8.
The vast literature on stochastic loss reserving concentrates on data aggregated in run-off triangles. However, a triangle is a summary of an underlying data set with the development of individual claims. We refer to this data set as 'micro-level' data. Using the framework of Position Dependent Marked Poisson Processes and statistical tools for recurrent events, a data set of liability claims from a European insurance company is analyzed. We use detailed information on the time of occurrence of the claim, the delay between occurrence and reporting to the insurance company, the occurrences of payments and their sizes, and the final settlement. Our specifications are (semi)parametric and our approach is likelihood based. We calibrate our model to historical data and use it to project the future development of open claims. An out-of-sample prediction exercise shows that we obtain detailed and valuable reserve calculations. For the case study developed in this paper, the micro-level model outperforms the results obtained with traditional loss reserving methods for aggregate data.

9.
In this paper, models for claim frequency and average claim size in non-life insurance are considered. Both covariates and spatial random effects are included, allowing the modelling of a spatial dependency pattern. We assume a Poisson model for the number of claims, while claim size is modelled using a Gamma distribution. However, in contrast to the usual compound Poisson model, we allow for dependencies between claim size and claim frequency. A fully Bayesian approach is followed, with parameters estimated using Markov chain Monte Carlo (MCMC). The issue of model comparison is thoroughly addressed. Besides the deviance information criterion and the predictive model choice criterion, we suggest the use of proper scoring rules based on the posterior predictive distribution for comparing models. We give an application to a comprehensive data set from a German car insurance company. The inclusion of spatial effects significantly improves the models for both claim frequency and claim size, and also leads to more accurate predictions of the total claim sizes. Further, we detect significant dependencies between the number of claims and claim size. Both spatial and number-of-claims effects are interpreted and quantified from an actuarial point of view.

10.
Accurate prediction of future claims is a fundamentally important problem in insurance. The Bayesian approach is natural in this context, as it provides a complete predictive distribution for future claims. The classical credibility theory provides a simple approximation to the mean of that predictive distribution as a point predictor, but this approach ignores other features of the predictive distribution, such as spread, that would be useful for decision making. In this article, we propose a Dirichlet process mixture of log-normals model and discuss the theoretical properties and computation of the corresponding predictive distribution. Numerical examples demonstrate the benefit of our model compared to some existing insurance loss models, and an R code implementation of the proposed method is also provided.

11.
Abstract

This paper develops a Pareto scale-inflated outlier model. This model is intended for use when data from some standard Pareto distribution of interest are suspected to have been contaminated with a relatively small number of outliers from a Pareto distribution with the same shape parameter but with an inflated scale parameter. The Bayesian analysis of this Pareto scale-inflated outlier model is considered and its implementation using the Gibbs sampler is discussed. The paper contains three worked illustrative examples, two of which feature actual insurance claims data.

12.
Abstract

The correlation among multiple lines of business plays an important role in quantifying the uncertainty of loss reserves for insurance portfolios. To accommodate correlation, most multivariate loss-reserving methods focus on the pairwise association between corresponding cells in multiple run-off triangles. However, such practice usually relies on the independence assumption across accident years and ignores the calendar year effects that could affect all open claims simultaneously and induce dependencies among loss triangles. To address this issue, we study a Bayesian log-normal model for the prediction of outstanding claims for dependent lines of business. In addition to the pairwise correlation, our method allows for an explicit examination of the correlation due to common calendar year effects. Further, different specifications of the calendar year trend are considered to reflect valuation actuaries' prior knowledge of claim development. In a case study, we analyze an insurance portfolio of personal and commercial auto lines from a major U.S. property-casualty insurer. It is shown that the incorporation of calendar year effects improves model fit significantly, though it contributes substantially to the predictive variability. The availability of the realizations of predicted claims permits us to perform a retrospective test, which suggests that extra prediction uncertainty is indispensable in modern risk management practices.

13.
Insurers are faced with the challenge of estimating the future reserves needed to handle historic and outstanding claims that are not fully settled. A well-known and widely used technique is the chain-ladder method, which is a deterministic algorithm. To include a stochastic component, one may apply generalized linear models to the run-off triangles based on past claims data. Analytical expressions for the standard deviation of the resulting reserve estimates are typically difficult to derive. A popular alternative approach to obtaining inference is to use the bootstrap technique. However, the standard procedures are very sensitive to the possible presence of outliers. These atypical observations, deviating from the pattern of the majority of the data, may either inflate or deflate traditional reserve estimates and corresponding inference such as their standard errors. Even when paired with a robust chain-ladder method, classical bootstrap inference may break down. Therefore, we discuss and implement several robust bootstrap procedures in the claims reserving framework, and we investigate and compare their performance on both simulated and real data. We also illustrate their use for obtaining the distribution of one-year risk measures.
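The fragility of classical bootstrap inference to outliers shows up even in a toy setting. The sketch below (made-up numbers, plain nonparametric bootstrap of a mean, not the reserving bootstrap from the paper) demonstrates how a single contaminated observation inflates the bootstrapped standard error many-fold:

```python
import random
import statistics

def bootstrap_se(data, stat, n_boot, rng):
    """Nonparametric bootstrap estimate of the standard error of `stat`."""
    reps = [stat([rng.choice(data) for _ in data]) for _ in range(n_boot)]
    return statistics.stdev(reps)

clean = [10.0, 11.0, 9.0, 12.0, 10.0, 11.0, 10.0, 9.0, 11.0, 10.0]
contaminated = clean[:-1] + [100.0]   # one outlier replaces the last value

rng = random.Random(42)
se_clean = bootstrap_se(clean, statistics.mean, 2000, rng)
se_outlier = bootstrap_se(contaminated, statistics.mean, 2000, rng)
# se_outlier dwarfs se_clean: resamples that include the outlier once or twice
# scatter the bootstrap means widely
```

Robust bootstrap procedures replace either the resampled statistic or the resampling scheme itself so that such a contaminated point cannot dominate the bootstrap distribution.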

14.

This paper considers the collective risk model for the insurance claims process. We will adopt a Bayesian point of view, where uncertainty concerning the specification of the prior distribution is a common question. The robust Bayesian approach uses a class of prior distributions which model uncertainty about the prior, instead of a single distribution. Relatively little research has dealt with robustness with respect to ratios of posterior expectations as occurs with the Esscher and Variance premium principles. Appropriate techniques are developed in this paper to solve this problem using the k-contamination class in the collective risk model.

15.
This paper is inspired by two papers of Riegel, who proposed to consider the paid and incurred loss development of individual claims, to use a filter in order to separate small and large claims, and to construct loss development squares for the paid or incurred small or large claims and for the numbers of large claims. We show that such loss development squares can be constructed from collective models for the accident years. Moreover, under certain assumptions on these collective models, we show that a development pattern exists for each of these loss development squares, which implies that various methods of loss reserving can be used for prediction and that the chain ladder method is a natural method for the prediction of future numbers of large claims.

16.
In this paper we present two different approaches to how one can include diagonal effects in non-life claims reserving based on run-off triangles. Empirical analyses suggest that the approaches in Zehnwirth (2003) and Kuang et al. (2008a, 2008b) do not work well with low-dimensional run-off triangles because the estimation uncertainty is too large. To overcome this problem we consider similar models with a smaller number of parameters. These are closely related to the framework considered in Verbeek (1972) and Taylor (1977, 2000): the separation method. We explain that these models can be interpreted as extensions of the multiplicative Poisson models introduced by Hachemeister & Stanard (1975) and Mack (1991).

17.
Abstract

In the context of predicting future claims, a fully Bayesian analysis – one that specifies a statistical model, prior distribution, and updates using Bayes's formula – is often viewed as the gold standard, while Bühlmann's credibility estimator serves as a simple approximation. But those desirable properties that give the Bayesian solution its elevated status depend critically on the posited model being correctly specified. Here we investigate the asymptotic behavior of Bayesian posterior distributions under a misspecified model, and our conclusion is that misspecification bias generally has damaging effects that can lead to inaccurate inference and prediction. The credibility estimator, on the other hand, is not sensitive at all to model misspecification, giving it an advantage over the Bayesian solution in those practically relevant cases where the model is uncertain. This raises the question: does robustness to model misspecification require that we abandon uncertainty quantification based on a posterior distribution? Our answer is no, and we offer an alternative Gibbs posterior construction. Furthermore, we argue that this Gibbs perspective provides a new characterization of Bühlmann's credibility estimator.
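Bühlmann's credibility estimator referred to here has a simple closed form: with within-risk variance σ², between-risk variance τ², and collective mean μ, the credibility weight is Z = n / (n + σ²/τ²). A minimal sketch of the standard formula (the numerical values are illustrative, not from the article):

```python
def buhlmann_credibility(xbar, n, mu, sigma2, tau2):
    """Credibility premium Z * xbar + (1 - Z) * mu,
    with Z = n / (n + sigma2 / tau2)."""
    k = sigma2 / tau2              # Buhlmann's credibility coefficient
    z = n / (n + k)                # weight on the risk's own experience
    return z * xbar + (1 - z) * mu

# A risk with 5 years of data, own mean 120, collective mean 100:
premium = buhlmann_credibility(xbar=120.0, n=5, mu=100.0, sigma2=400.0, tau2=100.0)
# Z = 5 / (5 + 4) = 5/9, so the premium lies between 100 and 120
```

The estimator depends on the model only through these two variance components, which is why it remains usable when the full likelihood is misspecified.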

18.
In this article, a new methodology for obtaining a premium based on a broad class of conjugate prior distributions, assuming lognormal claims, is presented. The new class of prior distributions arises in a natural way, using the conditional specification technique introduced by Arnold, Castillo, and Sarabia (1998, 1999). The new family of prior distributions is very flexible and contains, as particular cases, many other distributions proposed in the literature. Together with its flexibility, the main advantage of this distribution is that, owing to its dependence on a large number of hyperparameters, it allows a large amount of prior information to be incorporated. Several methods for hyperparameter elicitation are proposed. Finally, some examples with real and simulated data are given.

19.
Abstract

This paper deals with the prediction of the amount of outstanding automobile claims that an insurance company will pay in the near future. We consider various competing models using Bayesian theory and Markov chain Monte Carlo methods. Claim counts are used to add a further hierarchical stage to the model with log-normally distributed claim amounts and to its corresponding state space version. In this way, we incorporate information from both the outstanding claim amounts and the counts data, resulting in new model formulations. Implementation details and illustrations with real insurance data are provided.

20.
Insurance claims data usually contain a large number of zeros and exhibit fat-tail behavior. Misestimation of one end of the tail of the claims distribution impacts the other end and can affect both the adequacy of premiums and the reserves needed. In addition, insured policyholders in a portfolio are naturally non-homogeneous. It is an ongoing challenge for actuaries to build a predictive model that will simultaneously capture these peculiar characteristics of claims data and policyholder heterogeneity. Such models can help make improved predictions and thereby ease the decision-making process. This article proposes the use of spliced regression models for fitting insurance loss data. A primary advantage of spliced distributions is their flexibility to accommodate modeling different segments of the claims distribution with different parametric models. The threshold that breaks the segments is assumed to be a parameter, and this presents an additional challenge in the estimation. Our simulation study demonstrates the effectiveness of using multistage optimization for likelihood inference and, at the same time, the repercussions of model misspecification. For purposes of illustration, we consider three-component spliced regression models: the first component contains the zeros, the second models the middle segment of the loss data, and the third models the tail segment. We calibrate these proposed models and evaluate their performance using a Singapore auto insurance claims data set. The estimation results show that the spliced regression model performs better than the Tweedie regression model in terms of tail fitting and prediction accuracy.
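The three-component structure can be sketched as a spliced density: a point mass at zero, a truncated parametric body on (0, t], and a Pareto tail beyond the threshold t. The exponential body and all parameter values below are illustrative assumptions, not the parametric components fitted in the article.

```python
import math

def spliced_pdf(x, p_zero, p_body, threshold, body_rate, tail_alpha):
    """Density of the continuous part of a three-component spliced model:
    P(X = 0) = p_zero; weight p_body on the body (0, threshold];
    weight 1 - p_zero - p_body on the Pareto tail (threshold, inf)."""
    if x <= 0.0:
        return 0.0                       # the zero component is a point mass, not a density
    if x <= threshold:
        # exponential density truncated to (0, threshold], renormalised
        norm = 1.0 - math.exp(-body_rate * threshold)
        return p_body * body_rate * math.exp(-body_rate * x) / norm
    p_tail = 1.0 - p_zero - p_body
    # Pareto density anchored at the threshold
    return p_tail * tail_alpha * threshold ** tail_alpha / x ** (tail_alpha + 1)

# Sanity check: the continuous part should carry mass 1 - p_zero (Riemann sum)
p_zero, p_body, t = 0.4, 0.45, 100.0
xs = [i * 0.05 for i in range(1, 200001)]        # grid from 0.05 up to 10000
mass = sum(spliced_pdf(x, p_zero, p_body, t, 0.05, 3.0) for x in xs) * 0.05
```

In the regression version of the abstract, the component weights and parameters would depend on policyholder covariates, and the threshold itself is estimated rather than fixed.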
