Similar Articles
20 similar articles retrieved.
1.
Abstract

Credibility is a form of insurance pricing that is widely used, particularly in North America. The theory of credibility has been called a “cornerstone” in the field of actuarial science. Students of the North American actuarial bodies also study loss distributions, the process of statistical inference of relating a set of data to a theoretical (loss) distribution. In this work, we develop a direct link between credibility and loss distributions through the notion of a copula, a tool for understanding relationships among multivariate outcomes.

This paper develops credibility using a longitudinal data framework. In a longitudinal data framework, one might encounter data from a cross section of risk classes (towns) with a history of insurance claims available for each risk class. For the marginal claims distributions, we use generalized linear models, an extension of linear regression that also encompasses Weibull and Gamma regressions. Copulas are used to model the dependencies over time; specifically, this paper is the first to propose using a t-copula in the context of generalized linear models. The t-copula is the copula associated with the multivariate t-distribution; like the univariate t-distributions, it seems especially suitable for empirical work. Moreover, we show that the t-copula gives rise to easily computable predictive distributions that we use to generate credibility predictors. Like Bayesian methods, our copula credibility prediction methods allow us to provide an entire distribution of predicted claims, not just a point prediction.

We present an illustrative example of Massachusetts automobile claims, and compare our new credibility estimates with those currently existing in the literature.
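
For a concrete feel for the dependence tool used here, the following is a minimal sketch of evaluating a bivariate t-copula density with SciPy (requires scipy >= 1.6 for multivariate_t); it is not the paper's full GLM-plus-copula machinery, and the correlation, degrees of freedom, and evaluation point are illustrative assumptions.

```python
import numpy as np
from scipy.stats import t, multivariate_t

def t_copula_pdf(u, rho, nu):
    """Density of a bivariate t-copula with correlation rho and nu degrees
    of freedom, evaluated at u = (u1, u2) in (0,1)^2."""
    x = t.ppf(u, df=nu)                       # map uniforms to t quantiles
    corr = np.array([[1.0, rho], [rho, 1.0]])
    joint = multivariate_t(loc=[0.0, 0.0], shape=corr, df=nu).pdf(x)
    margins = t.pdf(x, df=nu).prod()          # product of marginal densities
    return joint / margins

# Tail dependence of the t-copula shows up as a density spike near (1, 1).
print(t_copula_pdf(np.array([0.95, 0.95]), rho=0.5, nu=4))
```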

2.
ABSTRACT

We propose an asymptotic theory for distribution forecasting from the log-normal chain-ladder model. The theory overcomes the difficulty of convoluting log-normal variables and takes estimation error into account. The results differ from those of the over-dispersed Poisson model and from the chain-ladder-based bootstrap. We embed the log-normal chain-ladder model in a class of infinitely divisible distributions called the generalized log-normal chain-ladder model. The asymptotic theory uses small-σ asymptotics where the dimension of the reserving triangle is kept fixed while the standard deviation is assumed to decrease. The resulting asymptotic forecast distributions follow t-distributions. The theory is supported by simulations and an empirical application.
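
As context for the model being embedded, here is a minimal sketch of the classical deterministic chain-ladder point forecast on a cumulative run-off triangle; the triangle values are made up, and the paper's distribution forecasts and estimation-error corrections are not reproduced.

```python
import numpy as np

# Cumulative run-off triangle (rows: accident years, cols: development years);
# NaN marks the unobserved lower triangle. Figures are illustrative only.
C = np.array([
    [1000., 1800., 2100., 2200.],
    [1100., 2000., 2300., np.nan],
    [ 900., 1700., np.nan, np.nan],
    [1200., np.nan, np.nan, np.nan],
])
n = C.shape[0]

# Volume-weighted development factors f_j = sum_i C[i,j+1] / sum_i C[i,j].
f = []
for j in range(n - 1):
    rows = ~np.isnan(C[:, j + 1])
    f.append(C[rows, j + 1].sum() / C[rows, j].sum())

# Fill the lower triangle by chaining the factors forward.
for i in range(n):
    for j in range(n - 1):
        if np.isnan(C[i, j + 1]):
            C[i, j + 1] = C[i, j] * f[j]

# Reserve = forecast ultimate minus the latest observed diagonal.
reserves = C[:, -1] - np.array([C[i, n - 1 - i] for i in range(n)])
print(f, reserves)
```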

3.
Abstract

The conventional approach to evolutionary credibility theory assumes a linear state-space model for the longitudinal claims data so that Kalman filters can be used to estimate the claims’ expected values, which are assumed to form an autoregressive time series. We propose a class of linear mixed models as an alternative to linear state-space models for evolutionary credibility and show that the predictive performance is comparable to that of the Kalman filter when the claims are generated by a linear state-space model. More importantly, this approach can be readily extended to generalized linear mixed models for the longitudinal claims data. We illustrate its applications by addressing the “excess zeros” issue, namely that a substantial fraction of policies has no claims at various times during the period under consideration.
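
A hedged illustration of the mixed-model route: for a Gaussian random-intercept model, the best linear unbiased predictor of a class mean reduces to a Bühlmann-style credibility weighting. The variance components below are assumed known for simplicity; the paper's GLMM extension and excess-zeros handling are not shown.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 50, 5                        # risk classes, years observed per class
mu, sigma_b, sigma_e = 10.0, 2.0, 4.0   # grand mean, between/within std devs

b = rng.normal(0, sigma_b, m)                       # latent class effects
y = mu + b[:, None] + rng.normal(0, sigma_e, (m, n))

# BLUP of each class mean is a credibility weighting with
#   Z = n*sigma_b^2 / (n*sigma_b^2 + sigma_e^2).
Z = n * sigma_b**2 / (n * sigma_b**2 + sigma_e**2)
pred = Z * y.mean(axis=1) + (1 - Z) * mu

# Shrinkage beats the raw class means in mean squared error.
print(Z,
      np.mean((pred - (mu + b))**2),
      np.mean((y.mean(axis=1) - (mu + b))**2))
```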

4.
ABSTRACT

In the context of predicting future claims, a fully Bayesian analysis – one that specifies a statistical model, prior distribution, and updates using Bayes's formula – is often viewed as the gold standard, while Bühlmann's credibility estimator serves as a simple approximation. But those desirable properties that give the Bayesian solution its elevated status depend critically on the posited model being correctly specified. Here we investigate the asymptotic behavior of Bayesian posterior distributions under a misspecified model, and our conclusion is that misspecification bias generally has damaging effects that can lead to inaccurate inference and prediction. The credibility estimator, on the other hand, is not sensitive at all to model misspecification, giving it an advantage over the Bayesian solution in those practically relevant cases where the model is uncertain. This raises the question: does robustness to model misspecification require that we abandon uncertainty quantification based on a posterior distribution? Our answer to this question is No, and we offer an alternative Gibbs posterior construction. Furthermore, we argue that this Gibbs perspective provides a new characterization of Bühlmann's credibility estimator.
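
For reference, a minimal sketch of the nonparametric Bühlmann credibility estimator discussed here, with the structural parameters estimated from the data; this is the textbook empirical version, not the paper's Gibbs-posterior construction.

```python
import numpy as np

def buhlmann_predict(claims):
    """Empirical Bühlmann credibility premiums for an m x n array of claims
    (m risks, n periods each). Returns one next-period premium per risk."""
    m, n = claims.shape
    xbar_i = claims.mean(axis=1)              # per-risk sample means
    xbar = xbar_i.mean()                      # grand mean
    v = claims.var(axis=1, ddof=1).mean()     # expected process variance
    a = xbar_i.var(ddof=1) - v / n            # variance of hypothetical means
    if a <= 0:                                # no credible heterogeneity
        return np.full(m, xbar)
    Z = n / (n + v / a)                       # credibility factor
    return Z * xbar_i + (1 - Z) * xbar

rng = np.random.default_rng(0)
claims = rng.gamma(2.0, rng.uniform(300, 700, size=(20, 1)), size=(20, 6))
print(buhlmann_predict(claims))
```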

5.
Abstract

As is well known in actuarial practice, excess claims (outliers) have a disturbing effect on the ratemaking process. To obtain better estimators of premiums, which are based on credibility theory, Künsch, and Gisler and Reinhard, suggested using robust methods. The estimators proposed by these authors are indeed resistant to outliers and serve as an excellent example of how useful robust models can be for insurance pricing. In this article we further refine these procedures by reducing the degree of heuristic arguments they involve. Specifically, we develop a class of robust estimators for the credibility premium when claims are approximately gamma-distributed and thoroughly study their robustness-efficiency trade-offs in large and small samples. Under specific data-generating scenarios, this approach yields quantitative indices of estimators’ strength and weakness, and it allows the actuary (who is typically equipped with information beyond the statistical model) to choose a procedure from a full menu of possibilities. Practical performance of our methods is illustrated under several simulated scenarios and by employing expert judgment.
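
A toy illustration of the robustness-efficiency trade-off, using a trimmed mean as a stand-in rather than the authors' gamma-specific robust credibility estimators: a few excess claims drag the sample mean far more than a robust location estimate, at a small efficiency cost on clean data.

```python
import numpy as np
from scipy.stats import trim_mean

rng = np.random.default_rng(7)
clean = rng.gamma(shape=2.0, scale=500.0, size=200)   # mean 1000
contaminated = clean.copy()
contaminated[:4] = 50_000.0                           # a few excess claims

for label, data in (("clean", clean), ("contaminated", contaminated)):
    # Sample mean vs 5%-trimmed mean: the latter barely moves under outliers.
    print(label, data.mean(), trim_mean(data, proportiontocut=0.05))
```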

6.
Abstract

Current formulas in credibility theory often estimate expected claims as a function of the sample mean of the experience claims of a policyholder. An actuary may wish to estimate future claims as a function of some statistic other than the sample arithmetic mean of claims, such as the sample geometric mean. This can be suggested to the actuary through the exercise of regressing claims on the geometric mean of prior claims. It can also be suggested through a particular probabilistic model of claims, such as a model that assumes a lognormal conditional distribution. In the first case, the actuary may lean towards using a linear function of the geometric mean, depending on the results of the data analysis. On the other hand, through a probabilistic model, the actuary may want to use the most accurate estimator of future claims, as measured by squared-error loss. However, this estimator might not be linear.

In this paper, I provide a method for balancing the conflicting goals of linearity and accuracy. The proposed credibility estimator minimizes the expectation of a linear combination of a squared-error term and a second-derivative term. The squared-error term measures the accuracy of the estimator, while the second-derivative term constrains the estimator to be close to linear. I consider only those families of distributions with a one-dimensional sufficient statistic and estimators that are functions of that sufficient statistic or of the sample mean. Claim estimators are evaluated by comparing their conditional mean squared errors. In general, functions of the sufficient statistics prove to be better credibility estimators than functions of the sample mean.
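
A simulation sketch of the point being made, under an assumed lognormal claims model: predictors built from the log-scale sufficient statistics (the geometric mean together with the log-scale variance) can achieve lower squared error for the mean claim than the sample arithmetic mean. The exp(s²/2) bias correction below is a simple illustrative adjustment, not the paper's penalized estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 0.0, 1.0, 20
claims = rng.lognormal(mu, sigma, size=(10_000, n))   # 10k samples of n claims

arith = claims.mean(axis=1)                           # sample arithmetic mean
geo = np.exp(np.log(claims).mean(axis=1))             # sample geometric mean

# For a lognormal, E[X] = exp(mu + sigma^2/2); correct the geometric mean
# using the log-scale sample variance (a function of the sufficient statistics).
s2 = np.log(claims).var(axis=1, ddof=1)
geo_corrected = geo * np.exp(s2 / 2)

true_mean = np.exp(mu + sigma**2 / 2)
for label, est in (("arithmetic", arith), ("geometric", geo),
                   ("geo-corrected", geo_corrected)):
    print(label, np.mean((est - true_mean)**2))       # MSE comparison
```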

7.
Abstract

The aim of this paper is to analyse two functions that are of general interest in the collective risk theory, namely F, the distribution function of the total amount of claims, and Π, the stop-loss premium. Section 2 presents certain basic formulae. Sections 17-18 present five claim distributions. Corresponding to these claim distributions, the functions F and Π were calculated under various assumptions as to the distribution of the number of claims. These calculations were performed on an electronic computer, and the numerical method used for this purpose is presented in sections 9, 19 and 20 under the name of the C-method, which has the advantage of furnishing upper and lower limits of the quantities under estimation. The means of these limits, in the following regarded as the “exact” results, are given in Tables 4-20. Sections 11-16 present certain approximation methods. The N-method of section 11 is an Edgeworth expansion, while the G-method given in section 12 is an approximation by a Pearson type III curve. The methods presented in sections 13-16, and denoted A1-A4, are all applications and modifications of the Esscher method. These approximation methods have been applied for the calculation of F and Π in the cases mentioned above in which “exact” results were obtained. The results are given in Tables 4-20. The object of this investigation was to obtain information as to the precision of the approximation methods in question, and to compare their relative merits. These results are discussed in sections 21-24.
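
The C-method itself is not reproduced here, but a modern standard route to the same two targets, the aggregate-claims distribution F and the stop-loss premium Π, is Panjer's recursion, sketched below for a compound Poisson example on an integer lattice (the rate and severity probabilities are illustrative).

```python
import numpy as np

def panjer_compound_poisson(lam, severity, M):
    """Probabilities g[k] = P(S = k), k = 0..M-1, for a compound Poisson S
    with rate lam and severity pmf on {1, 2, ...}: severity[j] = P(X = j+1).
    Poisson case of Panjer: g_k = (lam/k) * sum_j j * f_j * g_{k-j}."""
    f = np.zeros(M)
    f[1:len(severity) + 1] = severity
    g = np.zeros(M)
    g[0] = np.exp(-lam)                       # P(no claims)
    for k in range(1, M):
        j = np.arange(1, k + 1)
        g[k] = (lam / k) * np.sum(j * f[j] * g[k - j])
    return g

lam = 3.0
severity = np.array([0.5, 0.3, 0.2])          # P(X=1), P(X=2), P(X=3)
g = panjer_compound_poisson(lam, severity, M=60)

F = np.cumsum(g)                              # distribution function of S
k = np.arange(60)
d = 10                                        # stop-loss retention
stop_loss = np.sum(np.maximum(k - d, 0) * g)  # Pi(d) = E[(S - d)+]
print(F[d], stop_loss)
```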

8.
Quantitative Finance, 2013, 13(3): 173–183
Abstract

We review the general class of analytically tractable asset-price models that was introduced by Brigo and Mercurio (2001a Mathematical Finance—Bachelier Congr. 2000 (Springer Finance) ed H Geman, D B Madan, S R Pliska and A C F Vorst (Berlin: Springer) pp 151–74), where the considered asset can be an exchange rate, a stock index or even a forward Libor rate. The class is based on an explicit SDE under a given forward measure and includes models featuring (i) explicit asset-price dynamics, (ii) a virtually unlimited number of parameters and (iii) analytical formulae for European options.

We also review the fundamental case where the asset-price density is given, at every time, by a mixture of log-normal densities with equal means. We then introduce two other cases: the first is still based on log-normal densities, but it allows for different means in the distributions; the second is based on processes of hyperbolic-sine type.

Finally, we test the calibration of the considered models to real market data, choosing a particularly asymmetric volatility surface. As expected, the model based on hyperbolic-sine density mixtures achieves the lowest calibration error.
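
A minimal sketch of the fundamental equal-means lognormal-mixture case: under the forward measure, the European call price is the corresponding mixture of Black prices, one per component volatility. The parameters below are illustrative, and discounting is set to 1 for simplicity.

```python
import numpy as np
from scipy.stats import norm

def black_call(F, K, sigma, T, df=1.0):
    """Black formula for a call on a forward F with discount factor df."""
    d1 = (np.log(F / K) + 0.5 * sigma**2 * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return df * (F * norm.cdf(d1) - K * norm.cdf(d2))

F0, T = 100.0, 1.0
weights = np.array([0.6, 0.4])                # mixture weights (sum to 1)
vols = np.array([0.10, 0.35])                 # component volatilities

for K in (80.0, 100.0, 120.0):
    # Call price under the equal-means lognormal mixture: a convex
    # combination of Black prices, which generates a volatility smile.
    price = np.sum(weights * np.array([black_call(F0, K, s, T) for s in vols]))
    print(K, price)
```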

9.

Stochastic approximation is a powerful tool for the sequential estimation of the zeros of a function. This methodology is defined and shown to be related to a broad class of credibility formulae derived for the Exponential Dispersion Family (EDF). We further consider a Location Dispersion Family (LDF), which is rich enough and for which no simple credibility formula exists. For this case, a Generalized Sequential Credibility Formula is suggested and an optimal stepwise gain sequence is derived.
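
A minimal sketch of the underlying Robbins-Monro scheme, which locates a root of h(θ) = E[Y(θ)] from noisy observations alone; the 1/n gain sequence below is the textbook choice, not the optimal stepwise gains derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def robbins_monro(noisy_h, theta0, n_steps=5000, a0=1.0):
    """Find the root of h(theta) = E[noisy_h(theta)] via the recursion
    theta_{n+1} = theta_n - a_n * Y_n with gains a_n = a0 / n."""
    theta = theta0
    for n in range(1, n_steps + 1):
        theta -= (a0 / n) * noisy_h(theta)
    return theta

# Example: h(theta) = theta - 3 observed with unit-variance noise; root is 3.
print(robbins_monro(lambda th: th - 3.0 + rng.normal(0.0, 1.0), theta0=0.0))
```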

10.
Credibility theory is a statistical tool to calculate the premium for the next period based on past claims experience and the manual rate. Each contract is characterized by a risk parameter. A phase-type (or PH) random variable, which is defined as the time until absorption in a continuous-time Markov chain, is fully characterized by two sets of parameters from that Markov chain: the initial probability vector and the transition intensity matrix. In this article, we identify an interpretable univariate risk parameter from amongst the many candidate parameters, by means of uniformization. The resulting density form is then expressed as an infinite mixture of Erlang distributions. These results are used to obtain a tractable likelihood function via a recursive formula. Then the best estimator for the next premium, i.e. the Bayesian premium, as well as its approximation by the Bühlmann credibility premium, is calculated. Finally, actuarial calculations for the Bühlmann and Bayesian premiums are investigated in the context of a gamma prior, and illustrated by simulated data in a series of examples.
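
For concreteness, a sketch of a phase-type density evaluated directly from its defining parameters, f(x) = α exp(Tx) t0 with exit vector t0 = −T·1; the uniformization, Erlang-mixture expansion, and premium calculations from the paper are not shown, and the two-phase example is an arbitrary choice.

```python
import numpy as np
from scipy.linalg import expm

def phase_type_pdf(x, alpha, T):
    """Density of a phase-type distribution with initial probability vector
    alpha and sub-intensity matrix T: f(x) = alpha @ expm(T*x) @ t0."""
    t0 = -T @ np.ones(T.shape[0])             # absorption (exit) rates
    return float(alpha @ expm(T * x) @ t0)

# A generalized Erlang: phase 1 (rate 1.5) then phase 2 (rate 0.8), then
# absorption.
alpha = np.array([1.0, 0.0])
T = np.array([[-1.5,  1.5],
              [ 0.0, -0.8]])
print([phase_type_pdf(x, alpha, T) for x in (0.5, 1.0, 2.0)])
```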

11.

This paper derives two-sided bounds for tails of compound negative binomial distributions, both in the exponential and heavy-tailed cases. Two approaches are employed to derive the two-sided bounds in the case of exponential tails. One is the convolution technique, as in Willmot & Lin (1997). The other is based on an identity of compound negative binomial distributions; they can be represented as a compound Poisson distribution with a compound logarithmic distribution as the underlying claims distribution. This connection between the compound negative binomial, Poisson and logarithmic distributions results in two-sided bounds for the tails of the compound negative binomial distribution, which also generalize and improve a result of Willmot & Lin (1997). For the heavy-tailed case, we use the method developed by Cai & Garrido (1999b). In addition, we give two-sided bounds for stop-loss premiums of compound negative binomial distributions. Furthermore, we derive bounds for the stop-loss premiums of general compound distributions among the classes of HNBUE and HNWUE.
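
The identity underlying the second approach can be checked numerically at the level of claim counts: a negative binomial count is a Poisson-stopped sum of logarithmic variables. The following simulation (parameters and sample size are arbitrary) compares tail probabilities under the two representations.

```python
import numpy as np

rng = np.random.default_rng(3)
r, p = 4.0, 0.4           # NB(r, p): number of failures before r-th success
n_sim = 100_000

# Direct simulation of the negative binomial count.
direct = rng.negative_binomial(r, p, n_sim)

# Same law via the compound representation: a Poisson(-r*ln(p)) number of
# Logarithmic(1 - p) summands.
lam = -r * np.log(p)
m = rng.poisson(lam, n_sim)
compound = np.array([rng.logseries(1 - p, k).sum() if k else 0 for k in m])

for x in (5, 10, 20):     # tail probabilities agree up to Monte Carlo error
    print(x, (direct > x).mean(), (compound > x).mean())
```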

12.
In the probabilistic risk aversion approach, risks are presumed to be random variables with known probability distributions. However, in some practical cases, for example, owing to the absence of historical data, the inherently uncertain character of the risks, or differing subjective judgements among decision-makers, risks may be hard, or inappropriate, to estimate with probability distributions, and the traditional probabilistic risk-aversion theory is then ineffective. To deal with these cases, we suggest measuring such risks as fuzzy variables, and accordingly present an alternative risk aversion approach based on credibility theory. In the present paper, first, the definition of the credibilistic risk premium proposed by Georgescu and Kinnunen [Fuzzy Inf. Eng., 2013, 5, 399–416] is revised by taking the initial wealth into consideration, and a general method to compute the credibilistic risk premium is provided. Secondly, for risks represented by the commonly used LR fuzzy intervals, a simple formula for the local credibilistic risk premium is put forward. Finally, in a global sense, several equivalent propositions for comparative risk aversion under the credibility measure are provided. Illustrative examples are presented to show the applicability of the theoretical findings.
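
For readers unfamiliar with the credibility measure from fuzzy-set theory (distinct from actuarial credibility), here is a sketch for a triangular fuzzy variable: the credibility of an event is the average of its possibility and necessity, and the credibilistic expected value of a triangular variable (a, b, c) is (a + 2b + c)/4. The numbers are illustrative.

```python
def credibility_leq(x, a, b, c):
    """Credibility Cr{xi <= x} for a triangular fuzzy variable xi = (a, b, c),
    i.e. the average of the possibility and necessity of the event."""
    if x <= a:
        return 0.0
    if x <= b:
        return (x - a) / (2 * (b - a))        # rises to 1/2 at the mode b
    if x <= c:
        return (x + c - 2 * b) / (2 * (c - b))
    return 1.0

def credibilistic_expected_value(a, b, c):
    """Credibilistic expected value of a triangular fuzzy variable."""
    return (a + 2 * b + c) / 4

# A loss judged "around 100, somewhere between 60 and 180":
print(credibility_leq(100, 60, 100, 180),
      credibilistic_expected_value(60, 100, 180))
```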

13.
Abstract

We examine two performance measures advocated for asymmetric return distributions: the Sortino ratio, originally introduced by Sortino and Price (1994, J. Investing, 59–65), and a measure based on power utility introduced by Leland (1999, Financial Analysts J., 27–36). In particular, we investigate the role of the maximum principle in this context and assess the conditions under which the measures satisfy it. Our results add further motivation for the use of a modified Sortino ratio by placing it on a sound theoretical foundation. In this light, we discuss its relative merits compared with alternative approaches.
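
A minimal sketch of the Sortino ratio as commonly computed, excess return over a target divided by the downside deviation; the paper's modified version and the maximum-principle analysis are not reproduced, and the return series below are made up.

```python
import numpy as np

def sortino_ratio(returns, target=0.0):
    """Sortino ratio: mean excess return over `target` divided by the
    downside deviation (root mean square of shortfalls below the target)."""
    returns = np.asarray(returns, dtype=float)
    downside = np.minimum(returns - target, 0.0)
    dd = np.sqrt(np.mean(downside**2))
    return (returns.mean() - target) / dd

# Two series with the same mean: asymmetric downside exposure is penalized,
# so they rank differently under Sortino.
r1 = np.array([0.04, 0.02, 0.03, -0.05, 0.06])
r2 = np.array([-0.04, -0.02, -0.03, 0.05, 0.14])
print(sortino_ratio(r1), sortino_ratio(r2))
```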

14.
Abstract

The efficiency of an approximate credibility method for predicting outstanding claims in reinsurance is analysed. The advantage of the approximate method is that it does not require exact knowledge of the model's second-order moments.

15.
Abstract

This paper is an extension of earlier work (Rosenberg 1998; Rosenberg, Andrews, and Lenk 1999; Rosenberg and Griffith 2000) that introduced a statistical control model to supplement, inexpensively, current efforts to reduce unnecessary expenditures. The application of the study was to predict the rate of nonacceptable inpatient claims (NACs). In that work, a statistical model was proposed to link information obtained through an expensive audit with inexpensive, readily available information in order to estimate the probability that a claim is a NAC. The premise was that a statistical system can be developed to supplement the expensive audit and provide additional control between audits.

Estimates of the NAC rate obtained from the statistical model are used as input to a statistical monitor to assess whether the NAC rate has changed over time. The statistical monitor is the subject of this paper. The idea is that subgroups of claims can be analyzed inexpensively with the statistical monitor to determine whether any intervention is required before the next scheduled audit, or whether adjustments are needed in the determination of claims to be sampled for the audit. In this study, the estimate of the NAC rate at time t0 is compared against the estimate of the NAC rate at some later time t1. A decision rule is proposed to assess whether a change in the NAC rate has occurred for that subgroup. The methodology is also applicable to other health care measurements.
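
The paper's decision rule is built on model-based NAC estimates; as a simple frequentist stand-in, the following sketch tests whether the NAC rate differs between two audit points using a two-proportion z-test (the sample sizes and rates are hypothetical).

```python
import numpy as np
from scipy.stats import norm

def rate_change_test(p0, n0, p1, n1, alpha=0.05):
    """Two-proportion z-test: has the NAC rate moved between the estimate p0
    (based on n0 claims at time t0) and p1 (based on n1 claims at t1)?"""
    pooled = (p0 * n0 + p1 * n1) / (n0 + n1)
    se = np.sqrt(pooled * (1 - pooled) * (1 / n0 + 1 / n1))
    z = (p1 - p0) / se
    return abs(z) > norm.ppf(1 - alpha / 2), z   # (signal a change?, z-score)

# A 6% NAC rate on 500 claims at t0 versus 9% on 400 claims at t1:
print(rate_change_test(0.06, 500, 0.09, 400))
```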

16.
Abstract

The classical Bühlmann credibility formula estimates the hypothetical mean of a particular insured, or risk, by a weighted average of the grand mean of the collection of risks and the sample mean of the given insured. If the insured is unfortunate enough to have had large claims in the previous policy period(s), then the estimate of future claims for that risk will also be large. In this paper we provide actuaries with a method for not overly penalizing an unlucky insured while still targeting the goal of accuracy in the estimate. We propose a credibility estimator that minimizes the expectation of a linear combination of a squared-error term and a first-derivative term. The squared-error term measures the accuracy of the estimator, while the first-derivative term constrains the estimator to be close to constant.
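
An interpretive sketch of the penalty idea, not the paper's estimator: on simulated (risk mean, sample mean) pairs, fit a discretized estimator that minimizes squared error plus a first-derivative penalty (a Whittaker-style smoother). As the penalty weight grows, the fitted estimator flattens toward a constant, which is the limiting behavior the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate (theta, Xbar) pairs from a normal-normal credibility model.
n_risks, n_obs = 5000, 5
theta = rng.normal(100.0, 15.0, n_risks)                 # hypothetical means
xbar = theta + rng.normal(0, 30.0, n_risks) / np.sqrt(n_obs)

# Find delta on a grid minimizing sum (delta(X_i) - theta_i)^2
#   + lam * sum ((delta(x_{g+1}) - delta(x_g)) / h)^2.
grid = np.linspace(xbar.min(), xbar.max(), 60)
h = grid[1] - grid[0]
idx = np.clip(np.searchsorted(grid, xbar), 0, len(grid) - 1)

A = np.zeros((n_risks, len(grid)))
A[np.arange(n_risks), idx] = 1.0                         # cell membership
D = (np.eye(len(grid) - 1, len(grid), 1)
     - np.eye(len(grid) - 1, len(grid))) / h             # first differences

for lam in (0.001, 10.0, 1e6):
    delta = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ theta)
    print(lam, delta[5], delta[30], delta[-5])           # flattens as lam grows
```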

17.
18.
Abstract

The following problem is considered: for which p ∈ (0, 1) and which completely monotone functions g is g/[p+(1-p)g] completely monotone? This problem is shown to be equivalent to the inverse problem for thinned renewal processes. Some applications to gamma renewal processes are also discussed.

19.
Abstract

This paper shows how Bayesian models within the framework of generalized linear models can be applied to claims reserving. The author demonstrates that this approach is closely related to the Bornhuetter-Ferguson technique. Benktander (1976) and Mack (2000) previously studied the Bornhuetter-Ferguson technique and advocated using credibility models. The present paper uses a Bayesian parametric model within the framework of generalized linear models.
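
For context, a minimal sketch of the classical Bornhuetter-Ferguson point estimate that the Bayesian GLM relates to: the reserve blends a prior ultimate with the assumed development pattern rather than grossing up the paid-to-date amount as chain-ladder does. All figures are made up.

```python
import numpy as np

paid_to_date = np.array([2200., 2300., 1700., 1200.])    # latest diagonal
pct_developed = np.array([1.00, 0.95, 0.80, 0.50])       # cumulative % paid
prior_ultimate = np.array([2200., 2400., 2100., 2300.])  # e.g. premium x ELR

# Bornhuetter-Ferguson: the unpaid share of the PRIOR ultimate is the reserve.
bf_reserve = prior_ultimate * (1 - pct_developed)
bf_ultimate = paid_to_date + bf_reserve
print(bf_reserve, bf_ultimate)
```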

20.
In this paper, a Sparre Andersen risk process with arbitrary interclaim time distribution is considered. We analyze various ruin-related quantities in relation to the expected present value of total operating costs until ruin, which was first proposed by Cai et al. [(2009a). On the expectation of total discounted operating costs up to default and its applications. Advances in Applied Probability 41(2), 495–522] in the piecewise-deterministic compound Poisson risk model. The analysis in this paper is applicable to a wide range of quantities including (i) the insurer's expected total discounted utility until ruin; and (ii) the expected discounted aggregate claim amounts until ruin. On one hand, when claims belong to the class of combinations of exponentials, explicit results are obtained using the ruin theoretic approach of conditioning on the first drop via discounted densities (e.g. Willmot [(2007). On the discounted penalty function in the renewal risk model with general interclaim times. Insurance: Mathematics and Economics 41(1), 17–31]). On the other hand, without any distributional assumption on the claims, we also show that the expected present value of total operating costs until ruin can be expressed in terms of some potential measures, which are common tools in the literature of Lévy processes (e.g. Kyprianou [(2014). Fluctuations of Lévy processes with applications: introductory lectures, 2nd ed. Berlin Heidelberg: Springer-Verlag]). These potential measures are identified in terms of the discounted distributions of ascending and descending ladder heights. We shall demonstrate how the formulas resulting from the two seemingly different methods can be reconciled. The cases of (i) stationary renewal risk model and (ii) surplus-dependent premium are briefly discussed as well. Some interesting invariance properties in the former model are shown to hold true, extending a well-known ruin probability result in the literature. Numerical illustrations concerning the expected total discounted utility until ruin are also provided.
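
A Monte Carlo sketch of one of the quantities mentioned, the expected discounted aggregate claims until ruin, in the special compound Poisson case; the paper's analytic results for general Sparre Andersen models are not reproduced. The claim causing ruin is counted, the finite horizon is a truncation approximation, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

def discounted_claims_until_ruin(u, c, lam, delta, claim_sampler,
                                 n_paths=5_000, horizon=100.0):
    """Monte Carlo estimate, in a compound Poisson risk model with initial
    surplus u, premium rate c, and Poisson claim rate lam, of the expected
    discounted aggregate claims paid up to and including the ruin-causing
    claim (discount rate delta). Non-ruined paths are truncated at `horizon`."""
    out = np.empty(n_paths)
    for i in range(n_paths):
        t, paid, acc = 0.0, 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / lam)       # next claim arrival
            if t >= horizon:
                break
            x = claim_sampler()
            paid += x
            acc += np.exp(-delta * t) * x          # discounted claim cost
            if u + c * t - paid < 0:               # this claim causes ruin
                break
        out[i] = acc
    return out.mean()

# Exponential(1) claims, 20% premium loading, 3% discounting:
print(discounted_claims_until_ruin(u=5.0, c=1.2, lam=1.0, delta=0.03,
                                   claim_sampler=lambda: rng.exponential(1.0)))
```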
