Similar Literature
20 similar documents found (search time: 31 ms)
1.
Insurance claims have deductibles, which must be considered when pricing the insurance premium. The deductible may cause censoring and truncation of the insurance claims. However, modeling the unobserved response variable by maximum likelihood in this setting may be a challenge in practice. For this reason, a practitioner may instead perform a regression on the observed response and calculate the deductible rates from the regression coefficients. Natural questions are how well this approach performs and how it compares to the theoretically correct approach to rating the deductibles. A practitioner would also be interested in a systematic review of the approaches to modeling deductible rates. In this article, an overview of deductible ratemaking is provided, and the pros and cons of two deductible ratemaking approaches are compared: the regression approach and the maximum likelihood approach. The regression approach turns out to have an advantage in predicting aggregate claims, whereas the maximum likelihood approach has an advantage when calculating theoretically correct relativities for deductible levels beyond those observed in the empirical data. For demonstration, loss models are fit to the Wisconsin Local Government Property Insurance Fund data, and examples are provided for the ratemaking of per-loss deductibles offered by the fund. The article shows that the regression approach is in fact a single-parameter approximation to the true relativity curve. A comparison of selected models from the generalized beta family shows that long-tailed severity distributions may improve the deductible rating, while advanced frequency models such as zero-one-inflated models may have limited advantages because of estimation issues under censoring and truncation. In addition, models for specific peril types are combined to improve the ratemaking.
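As a hedged illustration of the maximum likelihood route (not the article's fitted generalized beta models), the sketch below fits a lognormal severity to synthetic ground-up losses and derives per-loss deductible relativities from limited expected values; the deductible levels and parameters are invented.

```python
import numpy as np
from scipy import stats, integrate

rng = np.random.default_rng(0)
ground_up = rng.lognormal(mean=8.0, sigma=1.4, size=5000)  # synthetic ground-up losses

# Maximum likelihood fit of a lognormal severity (loc fixed at 0 for a pure lognormal).
shape, loc, scale = stats.lognorm.fit(ground_up, floc=0)
dist = stats.lognorm(shape, loc=loc, scale=scale)

def limited_expected_value(d):
    """E[min(X, d)] = integral of the survival function over (0, d) for non-negative X."""
    val, _ = integrate.quad(lambda x: dist.sf(x), 0, d)
    return val

mean_loss = dist.mean()
for d in [500, 1000, 2500, 5000]:            # hypothetical per-loss deductible levels
    relativity = (mean_loss - limited_expected_value(d)) / mean_loss
    print(f"deductible {d:>5}: relativity {relativity:.3f}")
```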

2.
The Tweedie distribution, which features a probability mass at zero, is a convenient tool for insurance claims modeling and pure premium determination in general insurance. Motivated by the fact that an insurance policy typically provides multiple types of coverage, we propose a copula-based multivariate Tweedie regression for modeling the semi-continuous claims while accommodating the association among the different coverage types. The proposed approach also allows for dispersion modeling, resulting in a multivariate version of the double generalized linear model. We demonstrate the application to insurance ratemaking using a portfolio of automobile insurance policyholders from the state of Massachusetts in the United States.
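A minimal sketch of the marginal building block, assuming statsmodels' Tweedie GLM family with a log link and a variance power of 1.5: two coverage types are fitted separately and the copula step is replaced by a crude correlation of Pearson residuals. The synthetic frailty, covariates, and parameter values are placeholders, not the Massachusetts data or the authors' estimation procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
X = pd.DataFrame({"const": 1.0, "age": rng.uniform(18, 80, n),
                  "urban": rng.integers(0, 2, n).astype(float)})

# Synthetic semi-continuous claims for two coverage types, linked by a common frailty.
frailty = rng.gamma(4.0, 0.25, n)                       # mean-one shared random effect
def claims(mu):
    counts = rng.poisson(frailty * mu / 300.0)
    return rng.gamma(np.maximum(counts, 1), 300.0) * (counts > 0)

y1 = claims(np.exp(1.0 + 0.02 * X["age"]))
y2 = claims(np.exp(0.5 + 0.01 * X["age"] + 0.3 * X["urban"]))

# Marginal Tweedie GLMs with a log link, one per coverage type.
fam = sm.families.Tweedie(var_power=1.5, link=sm.families.links.Log())
fit1 = sm.GLM(y1, X, family=fam).fit()
fit2 = sm.GLM(y2, X, family=fam).fit()

# Crude stand-in for the copula step: correlation of Pearson residuals across coverages.
rho = np.corrcoef(fit1.resid_pearson, fit2.resid_pearson)[0, 1]
print(fit1.params.round(3), fit2.params.round(3), sep="\n")
print("cross-coverage residual correlation:", round(rho, 3))
```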

3.
Abstract

Credibility is a form of insurance pricing that is widely used, particularly in North America. The theory of credibility has been called a “cornerstone” of actuarial science. Students of the North American actuarial bodies also study loss distributions, the process of statistically relating a set of data to a theoretical (loss) distribution. In this work, we develop a direct link between credibility and loss distributions through the notion of a copula, a tool for understanding relationships among multivariate outcomes.

This paper develops credibility using a longitudinal data framework. In a longitudinal data framework, one might encounter data from a cross section of risk classes (towns) with a history of insurance claims available for each risk class. For the marginal claims distributions, we use generalized linear models, an extension of linear regression that also encompasses Weibull and Gamma regressions. Copulas are used to model the dependencies over time; specifically, this paper is the first to propose using a t-copula in the context of generalized linear models. The t-copula is the copula associated with the multivariate t-distribution; like the univariate t-distributions, it seems especially suitable for empirical work. Moreover, we show that the t-copula gives rise to easily computable predictive distributions that we use to generate credibility predictors. Like Bayesian methods, our copula credibility prediction methods allow us to provide an entire distribution of predicted claims, not just a point prediction.

We present an illustrative example of Massachusetts automobile claims and compare our new credibility estimates with those existing in the literature.
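To picture the construction, here is a small hypothetical sketch (not the paper's estimation or prediction code): claims for one risk class over several years are simulated from a t-copula with Gamma margins, and a crude conditional mean shows how past experience shifts the predictive distribution. The correlation, degrees of freedom, and Gamma parameters are assumed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
T, nu, rho = 5, 6, 0.5                               # years of history, t-copula df, correlation (assumed)
R = rho * np.ones((T, T)) + (1 - rho) * np.eye(T)    # exchangeable correlation over time

# Simulate from the t-copula: multivariate t = normal / sqrt(chi-square / nu).
n_sim = 10000
Z = rng.multivariate_normal(np.zeros(T), R, size=n_sim)
W = rng.chisquare(nu, size=(n_sim, 1)) / nu
U = stats.t.cdf(Z / np.sqrt(W), df=nu)               # uniform margins, t-copula dependence

# Gamma margins as in a Gamma GLM with log link: mean exp(eta), shape alpha.
alpha, eta = 2.0, 7.0
claims = stats.gamma.ppf(U, a=alpha, scale=np.exp(eta) / alpha)

# Crude predictive check: mean claim in year T given a high claim in year 1.
high = claims[:, 0] > np.quantile(claims[:, 0], 0.9)
print("unconditional mean:", claims[:, T - 1].mean().round(1))
print("mean given high first-year claim:", claims[high, T - 1].mean().round(1))
```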

4.
In this paper, models for claim frequency and average claim size in non-life insurance are considered. Both covariates and spatial random effects are included, allowing the modelling of a spatial dependency pattern. We assume a Poisson model for the number of claims, while claim size is modelled using a Gamma distribution. However, in contrast to the usual compound Poisson model, we allow for dependencies between claim size and claim frequency. A fully Bayesian approach is followed; parameters are estimated using Markov chain Monte Carlo (MCMC). The issue of model comparison is thoroughly addressed. Besides the deviance information criterion and the predictive model choice criterion, we suggest the use of proper scoring rules based on the posterior predictive distribution for comparing models. We give an application to a comprehensive data set from a German car insurance company. The inclusion of spatial effects significantly improves the models for both claim frequency and claim size and also leads to more accurate predictions of the total claim sizes. Further, we detect significant dependencies between the number of claims and claim size. Both the spatial and the number-of-claims effects are interpreted and quantified from an actuarial point of view.
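As a hedged illustration of the model-comparison idea only (not the paper's Bayesian spatial model), the snippet below evaluates two proper scoring rules, a kernel-based logarithmic score and a sample-based CRPS, on synthetic posterior predictive draws.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
y_obs = 1200.0                                   # observed total claim size (made up)
pred = rng.gamma(4.0, 300.0, size=5000)          # stand-in posterior predictive draws

# Logarithmic score: -log of the predictive density at the observation (smaller is better).
log_score = -np.log(gaussian_kde(pred)(y_obs)[0])

# Sample CRPS: E|X - y| - 0.5 * E|X - X'| with X, X' independent predictive draws.
x, x2 = pred, rng.permutation(pred)
crps = np.mean(np.abs(x - y_obs)) - 0.5 * np.mean(np.abs(x - x2))

print(f"log score {log_score:.3f}, CRPS {crps:.1f}")
```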

5.
Abstract

This paper deals with the prediction of the amount of outstanding automobile claims that an insurance company will pay in the near future. We consider various competing models using Bayesian theory and Markov chain Monte Carlo methods. Claim counts are used to add a further hierarchical stage to the model with log-normally distributed claim amounts and to its corresponding state-space version. In this way, we incorporate information from both the outstanding claim amounts and the counts data, resulting in new model formulations. Implementation details and illustrations with real insurance data are provided.
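A toy random-walk Metropolis sketch of this kind of hierarchy, assuming log-normal outstanding amounts whose log-mean depends on the logged claim counts; the data, priors, and two-parameter regression are placeholders and much simpler than the paper's state-space formulations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(16)
# Synthetic data: claim counts and outstanding amounts for a handful of future periods.
counts = np.array([120, 95, 60, 40, 22, 10])
amounts = rng.lognormal(2.0 + 0.9 * np.log(counts), 0.3)

def log_post(theta):
    b0, b1, log_sig = theta
    sig = np.exp(log_sig)
    loglik = stats.norm.logpdf(np.log(amounts), b0 + b1 * np.log(counts), sig).sum()
    logprior = stats.norm.logpdf([b0, b1], 0, 10).sum() + stats.norm.logpdf(log_sig, 0, 2)
    return loglik + logprior

# Random-walk Metropolis over (intercept, slope, log-sigma).
theta, lp = np.array([0.0, 1.0, 0.0]), None
lp = log_post(theta)
draws = []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.05, 3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    draws.append(theta.copy())
draws = np.array(draws[5000:])                   # discard burn-in

# Posterior predictive for the outstanding amount of a new period with 30 expected claims.
b0, b1, ls = draws[rng.integers(0, len(draws), 2000)].T
pred = rng.lognormal(b0 + b1 * np.log(30), np.exp(ls))
print("posterior predictive mean:", pred.mean().round(1),
      " 90% interval:", np.percentile(pred, [5, 95]).round(1))
```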

6.
This paper introduces non-parametric estimators for upper and lower tail dependence whose confidence intervals are obtained with a bootstrap method. We call these estimators ‘naïve estimators’ as they represent a discretization of Joe's formulae linking copulas to tail dependence. We apply the methodology to an empirical data set composed of three composite indexes for the three Tigers (Thailand, Malaysia and Indonesia). The extremes show a dependence structure that is symmetric for the Thai and Malaysian markets and asymmetric for the Thai and Indonesian markets and for the Malaysian and Indonesian markets. Using these results, we estimate the copula (which belongs to the Student or Archimedean copula families) for each pair of markets by two methods. Finally, we provide risk measurements using the best copula associated with each pair of markets.
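A minimal sketch of a naive-style upper tail dependence estimator with a paired bootstrap confidence interval, applied to synthetic return series rather than the Tiger-market indexes; the threshold u = 0.95 is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(4)

def upper_tail_dep(x, y, u=0.95):
    """Naive-style estimator: P(Fy(Y) > u | Fx(X) > u) from empirical ranks."""
    n = len(x)
    ux = (np.argsort(np.argsort(x)) + 1) / (n + 1)      # pseudo-observations
    uy = (np.argsort(np.argsort(y)) + 1) / (n + 1)
    return np.mean((ux > u) & (uy > u)) / (1 - u)       # equals 2 - (1 - C_n(u,u)) / (1 - u)

# Synthetic index returns whose tails are linked through a common shock.
z = rng.standard_normal(2000)
x = z + 0.3 * rng.standard_normal(2000)
y = z + 0.3 * rng.standard_normal(2000)

est = upper_tail_dep(x, y)
boot = []
for _ in range(500):
    idx = rng.integers(0, len(x), len(x))               # paired bootstrap resample
    boot.append(upper_tail_dep(x[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"lambda_U estimate {est:.3f}, 95% bootstrap CI ({lo:.3f}, {hi:.3f})")
```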

7.
A model for the statistical analysis of the total amount of insurance paid out on a policy is developed and applied. The model simultaneously deals with the number of claims (zero or more) and the amount of each claim. The number of claims is from a Poisson-based discrete distribution. Individual claim sizes are from a continuous right skewed distribution. The resulting distribution of total claim size is a mixed discrete-continuous model, with positive probability of a zero claim. The means and dispersions of the claim frequency and claim size distribution are modeled in terms of risk factors. The model is applied to a car insurance data set.
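The mixed discrete-continuous structure of total claims can be pictured with a short simulation, assuming Poisson counts and Gamma claim sizes with invented parameters (no covariates, unlike the paper's model):

```python
import numpy as np

rng = np.random.default_rng(5)
n_policies = 100000
lam = 0.12                                   # expected claims per policy-year (assumed)
shape, scale = 2.0, 800.0                    # Gamma claim-size parameters (assumed)

counts = rng.poisson(lam, n_policies)
# Sum of k iid Gamma(shape, scale) claims is Gamma(k * shape, scale); zero claims give zero.
totals = np.where(counts > 0, rng.gamma(np.maximum(counts, 1) * shape, scale), 0.0)

print("P(total = 0)    :", np.mean(totals == 0).round(3), "(theory:", round(np.exp(-lam), 3), ")")
print("mean total claim:", totals.mean().round(1), "(theory:", lam * shape * scale, ")")
```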

8.
The vast literature on stochastic loss reserving concentrates on data aggregated in run-off triangles. However, a triangle is a summary of an underlying data-set containing the development of individual claims. We refer to this data-set as ‘micro-level’ data. Using the framework of Position Dependent Marked Poisson Processes and statistical tools for recurrent events, we analyse a data-set of liability claims from a European insurance company. We use detailed information on the time of occurrence of the claim, the delay between occurrence and reporting to the insurance company, the occurrences of payments and their sizes, and the final settlement. Our specifications are (semi)parametric and our approach is likelihood based. We calibrate our model to historical data and use it to project the future development of open claims. An out-of-sample prediction exercise shows that we obtain detailed and valuable reserve calculations. For the case study developed in this paper, the micro-level model outperforms the results obtained with traditional loss reserving methods for aggregate data.
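A toy illustration of the micro-level viewpoint, assuming made-up occurrence rates, reporting delays and payment sizes: claims are simulated at the individual level and split into reported versus incurred-but-not-reported (IBNR) at a valuation date. It is not the paper's Position Dependent Marked Poisson Process likelihood.

```python
import numpy as np

rng = np.random.default_rng(6)
exposure_years, rate = 10.0, 500.0          # portfolio-level occurrence rate per year (assumed)
valuation_date = 8.0                        # value reserves at the end of year 8

# Occurrence times of claims form a Poisson process on [0, exposure_years].
n_claims = rng.poisson(rate * exposure_years)
occurrence = rng.uniform(0.0, exposure_years, n_claims)

# Reporting delays (in years) and ultimate payments, the marks attached to each occurrence.
delay = rng.weibull(0.8, n_claims) * 0.5
ultimate = rng.lognormal(7.0, 1.0, n_claims)

occurred = occurrence <= valuation_date
reported = occurred & (occurrence + delay <= valuation_date)
ibnr = occurred & ~reported

print("claims occurred :", occurred.sum())
print("reported by t=8 :", reported.sum())
print("IBNR count      :", ibnr.sum(), " IBNR expected payments:", ultimate[ibnr].sum().round(0))
```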

9.
This paper proposes a new approach to measuring dependencies in multivariate financial data. Data in finance and insurance often cover a long time period, so economic factors may induce changes within the dependence structure. Recently, two methods have been proposed that use copulas to analyse such changes. The first approach investigates changes within the parameters of the copula. The second determines the sequence of copulas using moving windows. In this paper we take into account the non-stationarity of the data and analyse the impact of (1) time-varying parameters for a copula family, and (2) the sequence of copulas, on the computation of the VaR and ES measures. We propose tests based on conditional copulas and goodness-of-fit to decide the type of change, and we give the corresponding change analysis. We illustrate our approach using the Standard & Poor's 500 and Nasdaq indices to compute risk measures under the two methods.
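The moving-window flavour of the second approach can be sketched as follows, under the simplifying assumptions of a Gaussian copula (correlation recovered from Kendall's tau), empirical margins, and an equally weighted two-asset portfolio; window length and confidence level are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
T = 1500
r1 = rng.standard_t(5, T) * 0.01
r2 = 0.6 * r1 + 0.8 * rng.standard_t(5, T) * 0.01    # two synthetic return series

def gaussian_copula_var(x, y, alpha=0.01, n_sim=20000):
    tau, _ = stats.kendalltau(x, y)
    rho = np.sin(np.pi * tau / 2)                    # moment relation for elliptical copulas
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n_sim)
    u = stats.norm.cdf(z)
    sim1 = np.quantile(x, u[:, 0])                   # empirical margins
    sim2 = np.quantile(y, u[:, 1])
    port = 0.5 * (sim1 + sim2)
    return -np.quantile(port, alpha)

window = 250
for start in range(0, T - window + 1, 250):          # non-overlapping windows for brevity
    x, y = r1[start:start + window], r2[start:start + window]
    print(f"window starting {start:4d}: 99% VaR {gaussian_copula_var(x, y):.4f}")
```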

10.
Recently, different bivariate Poisson regression models have been used in the actuarial literature to perform a priori ratemaking that takes into account the dependence between two types of claims. A natural extension of these models is to consider a posteriori ratemaking (i.e. experience rating models) that also relaxes the independence assumption. We introduce two bivariate experience rating models that integrate the a priori ratemaking based on bivariate Poisson regression models, extending the existing literature from the univariate to the bivariate case. These bivariate experience rating models are applied to an automobile insurance claims data-set to analyse the consequences for posterior premiums when the independence assumption is relaxed. The main finding is that the a posteriori risk factors obtained with the bivariate experience rating models are significantly lower than those derived under the independence assumption.
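The common-shock construction often used to build bivariate Poisson regressions can be pictured with a short simulation showing how the shared component induces the dependence that experience rating then exploits; the rates are invented and this is not the authors' estimation code.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200000
lam1, lam2, lam0 = 0.08, 0.05, 0.02          # type-specific rates and common-shock rate (assumed)

m0 = rng.poisson(lam0, n)                    # shared shock hitting both claim types
n1 = rng.poisson(lam1, n) + m0               # e.g. third-party liability claims
n2 = rng.poisson(lam2, n) + m0               # e.g. own-damage claims

cov = np.cov(n1, n2)[0, 1]
corr = np.corrcoef(n1, n2)[0, 1]
print(f"simulated covariance {cov:.4f} (theory {lam0}), correlation {corr:.3f}")
```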

11.
This article introduces to the statistical and insurance literature a mathematical technique for the a priori classification of objects when no training sample exists for which the exact correct group membership is known. The article also provides an example of the empirical application of the methodology to fraud detection for bodily injury claims in automobile insurance. With this technique, principal component analysis of RIDIT scores (PRIDIT), an insurance fraud detector can reduce uncertainty and increase the chances of targeting the appropriate claims, so that an organization will be more likely to allocate investigative resources efficiently to uncover insurance fraud. In addition, other (exogenous) empirical models can be validated relative to the PRIDIT-derived weights for optimal ranking of fraud/non-fraud claims and/or profiling. The technique simultaneously gives measures of the worth of the individual fraud indicator variables and a suspicion-level measure for the entire claim file that can be used to direct further fraud investigation resources. Moreover, the technique does so at a lower cost than utilizing human insurance investigators or insurance adjusters, but with similar outcomes. More generally, this technique is applicable to other commonly encountered managerial settings in which a large number of assignment decisions are made subjectively based on 'clues', which may change dramatically over time. This article explores the application of these techniques to automobile bodily injury insurance claims in detail.
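A compact sketch of the PRIDIT idea on made-up binary fraud indicators: each indicator is converted to a RIDIT-type score and the first principal component of the scored matrix supplies indicator weights and a per-claim suspicion score. The data-generating step and the latent 'true' fraud flag exist only to sanity-check the unsupervised output.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 1000
# Made-up binary fraud indicators (1 = indicator present); some are noisier than others.
latent = rng.uniform(size=n) < 0.15                      # hypothetical "true" fraud status
X = np.column_stack([
    (rng.uniform(size=n) < np.where(latent, p, 0.05)).astype(int)
    for p in (0.7, 0.5, 0.3, 0.1)
])

def ridit_scores(col):
    """RIDIT-type score for an ordered categorical column: P(lower) - P(higher)."""
    cats, counts = np.unique(col, return_counts=True)
    props = counts / len(col)
    below = np.cumsum(props) - props
    above = 1 - np.cumsum(props)
    score = dict(zip(cats, below - above))               # values in [-1, 1]
    return np.vectorize(score.get)(col)

F = np.column_stack([ridit_scores(X[:, j]) for j in range(X.shape[1])])

# First principal component of the RIDIT-scored matrix gives indicator weights
# (the sign of the leading component is arbitrary).
_, _, vt = np.linalg.svd(F - F.mean(axis=0), full_matrices=False)
weights = vt[0]
suspicion = (F - F.mean(axis=0)) @ weights               # per-claim suspicion score

print("indicator weights:", np.round(weights, 3))
print("mean score, fraud vs non-fraud:",
      suspicion[latent].mean().round(2), suspicion[~latent].mean().round(2))
```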

12.
This article proposes a computer-intensive methodology to build bonus-malus scales in automobile insurance. The claim frequency model is taken from Pinquet, Guillén, and Bolancé (2001). It accounts for overdispersion, heteroskedasticity, and dependence among repeated observations. Explanatory variables are taken into account in the determination of the relativities, yielding an integrated automobile ratemaking scheme. In that respect, it complements the study of Taylor (1997).

13.
Copulas with the full-range tail dependence property can cover the widest range of positive dependence in the tail, so that a regression model can be built to account for dynamic tail dependence patterns between variables. We propose a model that incorporates both regression on each margin of the bivariate response variables and regression on the dependence parameter for the response variables. An ACIG copula, which possesses the full-range tail dependence property, is implemented in the regression analysis. Comparisons between regression analyses based on the ACIG and Gumbel copulas are conducted, showing that the ACIG copula is generally better than the Gumbel copula when there is intermediate upper tail dependence. A simulation study illustrates that dynamic tail dependence structures between loss and ALAE can be captured using the one-parameter ACIG copula. Finally, we apply the ACIG and Gumbel regression models to a dataset from the U.S. Medical Expenditure Panel Survey. The empirical analysis suggests that the regression model with the ACIG copula improves the assessment of high-risk scenarios, especially for aggregated dependent risks.

14.
15.
In a series of two papers, this paper and the one by Ozkok et al. (Modelling critical illness claim diagnosis rates II: results), we develop statistical models to be used as a framework for estimating, and graduating, Critical Illness (CI) insurance diagnosis rates. We use UK data for 1999–2005 supplied by the Continuous Mortality Investigation (CMI) to illustrate their use. In this paper, we set out the basic methodology: we set out some models, describe the data available to us, and discuss the statistical distribution of the estimators proposed for CI diagnosis inception rates. A feature of CI insurance is the delay, on average about 6 months but in some cases much longer, between the diagnosis of an illness and the settlement of the subsequent claim. Modelling this delay, the so-called Claim Delay Distribution, is a necessary first step in the estimation of the claim diagnosis rates and is discussed in the present paper. In the subsequent paper, we derive and discuss diagnosis rates for CI claims from ‘all causes’ and also from specific causes.
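The Claim Delay Distribution step can be pictured with a toy maximum likelihood fit of a right-truncated lognormal to synthetic diagnosis-to-settlement delays; the delay distribution, observation window and truncation mechanism are assumptions, not the CMI data.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(10)
n = 4000
true_mu, true_sigma = np.log(0.5), 0.6                  # median delay ~ 6 months (assumed)

diagnosis = rng.uniform(0.0, 6.0, n)                    # diagnosis dates over a 6-year window
delay = rng.lognormal(true_mu, true_sigma, n)
cutoff = 7.0                                            # data extracted one year after the window
observable = cutoff - diagnosis                         # maximum delay that can be seen per claim
seen = delay <= observable                              # only settled (hence observed) claims

def neg_loglik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    d, tau = delay[seen], observable[seen]
    # Right-truncated lognormal likelihood: f(d) / F(tau).
    return -(stats.lognorm.logpdf(d, sigma, scale=np.exp(mu))
             - stats.lognorm.logcdf(tau, sigma, scale=np.exp(mu))).sum()

res = optimize.minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"fitted median delay {np.exp(mu_hat):.2f} years, sigma {sigma_hat:.2f} "
      f"(true {np.exp(true_mu):.2f}, {true_sigma:.2f})")
```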

16.
Abstract

What types of households own life insurance? Who owns term life and who owns whole life insurance? These are questions of great interest to insurers that operate in a highly competitive market. To answer these questions, we jointly examine household demand for two types of insurance, term and whole life, using data from the Survey of Consumer Finances, a probability sample of the U.S. population. We model both the frequency and the severity of demand for insurance, building on the work of Lin and Grace by using explanatory variables that they developed. For the frequency portion, the household decisions about whether to own term and whole life insurance are modeled simultaneously with a bivariate probit regression model. Given ownership of life insurance by a household, the amounts of insurance are analyzed using generalized linear models with a normal copula. The copula permits the bivariate modeling of insurance amounts for households that own both term and whole life insurance, about 20% of our sample. These models allow analysts to predict who owns life insurance and how much they own, an important input to the marketing process.

Moreover, our findings suggest that household demand for term and whole life insurance is jointly determined. After controlling for explanatory variables, there is a negative relationship in a household’s decision to own both whole and term life insurance (the frequency part) and a positive relationship in the amount of insurance purchased (the severity part). That is, the greater the probability of holding one type, the smaller the probability of holding the other type of life insurance; however, higher demand for both types exists when a household decides to own both. This mixed effect extends prior work that established a negative relationship, suggesting that term life insurance and whole life insurance are substitutes for one another. In contrast, our findings reveal that the ownership decision involves substitution but that, for households owning both types of insurance, the amounts are positively related. Therefore, term and whole life insurance are substitutes in frequency yet complements in severity.
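The frequency part (the joint ownership decision) can be sketched as a bivariate probit log-likelihood built from the bivariate normal CDF; the single covariate, the sample, and the optimizer settings are placeholders rather than the authors' Survey of Consumer Finances specification.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import minimize

rng = np.random.default_rng(11)
n = 400                                               # kept small: the CDF is evaluated row by row
income = rng.normal(0, 1, n)
X = np.column_stack([np.ones(n), income])

# Synthetic ownership indicators from correlated latent utilities.
true_rho = -0.3
eps = rng.multivariate_normal([0, 0], [[1, true_rho], [true_rho, 1]], size=n)
own_term = (X @ np.array([-0.2, 0.5]) + eps[:, 0] > 0).astype(int)
own_whole = (X @ np.array([-0.5, 0.8]) + eps[:, 1] > 0).astype(int)

def neg_loglik(params):
    b1, b2, rho = params[0:2], params[2:4], np.tanh(params[4])   # tanh keeps rho in (-1, 1)
    q1, q2 = 2 * own_term - 1, 2 * own_whole - 1                 # sign flips select the quadrant
    eta = np.column_stack([q1 * (X @ b1), q2 * (X @ b2)])
    r = q1 * q2 * rho
    # P(y1, y2 | x) = Phi_2(q1 x'b1, q2 x'b2; q1 q2 rho)
    probs = np.array([multivariate_normal.cdf(eta[i], cov=[[1.0, r[i]], [r[i], 1.0]])
                      for i in range(n)])
    return -np.sum(np.log(np.clip(probs, 1e-12, 1.0)))

res = minimize(neg_loglik, x0=np.zeros(5), method="Nelder-Mead",
               options={"maxiter": 3000, "xatol": 1e-3, "fatol": 1e-3})
print("estimated coefficients:", np.round(res.x[:4], 2))
print("estimated latent correlation:", round(float(np.tanh(res.x[4])), 3))
```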

17.
Quantitative Finance, 2013, 13(4): 231-250
Abstract

Using a key property of copulas, namely that they remain invariant under an arbitrary monotonic change of variable, we investigate the null hypothesis that the dependence between financial assets can be modelled by the Gaussian copula. We find that most pairs of currencies and pairs of major stocks are compatible with the Gaussian copula hypothesis, while this hypothesis can be rejected for the dependence between pairs of commodities (metals). Notwithstanding the apparent qualification of the Gaussian copula hypothesis for most of the currencies and stocks, a non-Gaussian copula, such as the Student copula, cannot be rejected if it has sufficiently many ‘degrees of freedom’. As a consequence, it may be very dangerous to embrace the Gaussian copula hypothesis blindly, especially when the coefficient of correlation between a pair of assets is high, so that the tail dependence neglected by the Gaussian copula can become large, leading one to ignore extreme events that may occur in unison.
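One way to picture the testing idea, as a simple diagnostic rather than the authors' formal test: transform each margin to normal scores through ranks (allowed because copulas are invariant under monotone transforms) and check whether the squared Mahalanobis distances behave like a chi-square with two degrees of freedom, as they approximately should under a Gaussian copula.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
n = 1500
# Synthetic pair with Student-t dependence, so the Gaussian copula should look doubtful.
nu, rho = 4, 0.6
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
w = rng.chisquare(nu, size=(n, 1)) / nu
data = z / np.sqrt(w)

def normal_scores(x):
    ranks = stats.rankdata(x) / (len(x) + 1)
    return stats.norm.ppf(ranks)

g = np.column_stack([normal_scores(data[:, 0]), normal_scores(data[:, 1])])
S = np.cov(g, rowvar=False)
d2 = np.einsum("ij,jk,ik->i", g, np.linalg.inv(S), g)   # squared Mahalanobis distances

# Under the Gaussian copula hypothesis d2 should be approximately chi-square with 2 df.
ks = stats.kstest(d2, "chi2", args=(2,))
print(f"KS statistic {ks.statistic:.3f}, p-value {ks.pvalue:.4f}")
```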

18.
Abstract

Pet insurance in North America continues to be a growing industry. Unlike in Europe, where some countries have as much as 50% of the pet population insured, very few pets in North America are insured. Pricing practices in the past have relied on market-share objectives more than on actual experience, and pricing continues to be performed on this basis with little consideration of actuarial principles and techniques. The development of mortality and morbidity models for use in pricing and in new product development is essential for pet insurance. This paper examines insurance claims as experienced in the Canadian market. The time-to-event data are investigated using the Cox proportional hazards model. The claim number follows a nonhomogeneous Poisson process with covariates. The claim size random variable is assumed to follow a lognormal distribution. These two models work well for aggregate claims with covariates. The first three central moments of the aggregate claims for one insured animal, as well as for a block of insured animals, are derived. We illustrate the models using data collected over an eight-year period.
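The first three central moments of aggregate claims under a compound Poisson frequency with lognormal severities have simple closed forms (each equals the Poisson rate times a raw severity moment); the sketch below states them and checks them by simulation with invented parameters.

```python
import numpy as np

rng = np.random.default_rng(13)
lam, mu, sigma = 0.4, 5.0, 0.7          # Poisson rate and lognormal severity parameters (assumed)

# Raw severity moments of a lognormal: E[X^k] = exp(k*mu + k^2*sigma^2/2).
m = [np.exp(k * mu + 0.5 * k**2 * sigma**2) for k in (1, 2, 3)]

# Compound Poisson aggregate S: mean, variance and third central moment are lam * m_k.
theory = [lam * mk for mk in m]

# Simulation check for one insured animal over one exposure period.
n_sim = 200000
counts = rng.poisson(lam, n_sim)
totals = np.array([rng.lognormal(mu, sigma, k).sum() for k in counts])
sim = [totals.mean(), totals.var(), ((totals - totals.mean()) ** 3).mean()]

for name, t, s in zip(["mean", "variance", "3rd central"], theory, sim):
    print(f"{name:12s} theory {t:14.1f}  simulated {s:14.1f}")
```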

19.
In the context of an insurance portfolio that provides dividend income for the insurance company’s shareholders, an important problem in risk theory concerns the barrier strategy, under which premium income is paid to the shareholders as dividends, until the next claim occurs, whenever the surplus attains the barrier level. In this paper, we are concerned with the estimation of the optimal dividend barrier, defined as the barrier level that maximizes the expected discounted dividends until ruin, under the widely used compound Poisson model for the aggregate claims process. We propose a semi-parametric statistical procedure for estimating the optimal dividend barrier, which is critically needed in applications. We first construct a consistent estimator of an objective function that is related in a complex way to the expected discounted dividends, and then take the estimated optimal dividend barrier to be the minimizer of this estimated objective function. We show that the constructed estimator of the optimal dividend barrier is statistically consistent. Numerical experiments on both simulated and real data demonstrate that the proposed estimators work reasonably well for samples of appropriate size.
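A brute-force Monte Carlo picture of the objective, not the paper's semi-parametric estimator: for each candidate barrier, simulate the compound Poisson surplus with exponential claim sizes, accumulate the discounted dividends paid at the barrier until ruin, and keep the barrier with the largest average. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(14)
lam, claim_mean, c, delta, u0 = 1.0, 1.0, 1.2, 0.03, 2.0   # claim rate, mean claim, premium rate, discount, initial surplus

def discounted_dividends(b, horizon=200.0):
    """One simulated path of discounted dividends under barrier b until ruin (or horizon)."""
    x, t, total = min(u0, b), 0.0, max(u0 - b, 0.0)    # any initial excess is paid out at once
    while t < horizon:
        wait = rng.exponential(1.0 / lam)              # time until the next claim
        reach = (b - x) / c                            # time needed to reach the barrier
        if reach < wait:                               # dividends paid at rate c while at the barrier
            total += c / delta * (np.exp(-delta * (t + reach)) - np.exp(-delta * (t + wait)))
            x = b
        else:
            x = x + c * wait
        x -= rng.exponential(claim_mean)               # claim arrives
        t += wait
        if x < 0:                                      # ruin: dividend stream stops
            break
    return total

barriers = np.linspace(1.0, 10.0, 10)
values = [np.mean([discounted_dividends(b) for _ in range(1500)]) for b in barriers]
best = barriers[int(np.argmax(values))]
print("estimated optimal barrier:", round(best, 2), " value:", round(max(values), 3))
```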

20.
ABSTRACT

The precise measurement of the association between asset returns is important for financial investors and risk managers. In this paper, we focus on a recent class of association models: Dynamic Conditional Score (DCS) copula models. Our contributions are the following: (i) We compare the statistical performance of several DCS copulas for several portfolios, studying the Clayton, rotated Clayton, Frank, Gaussian, Gumbel, rotated Gumbel, Plackett and Student's t copulas; we find that the DCS model with the Student's t copula is the most parsimonious model. (ii) We demonstrate that the copula score function discounts extreme observations. (iii) We jointly estimate the marginal distributions and the copula by maximum likelihood, using DCS models for the mean, volatility and association of asset returns. (iv) We estimate robust DCS copula models, for which the probability of a zero return observation is not necessarily zero. (v) We compare patterns of association in different regions of the distribution for different DCS copulas, using density contour plots and Monte Carlo (MC) experiments. (vi) We undertake a portfolio performance study with the estimation and backtesting of MC Value-at-Risk for the DCS model with the Student's t copula.
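The backtesting ingredient in (vi) can be sketched in a static form that ignores the dynamic (DCS) updating: calibrate a Student's t copula roughly (correlation from Kendall's tau, degrees of freedom simply assumed), simulate joint returns with empirical margins, and read off the portfolio Value-at-Risk and its in-sample exceedance rate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(15)
T = 2000
# Two synthetic heavy-tailed return series with a common shock.
shock = rng.standard_t(4, T) * 0.01
r1 = 0.7 * shock + rng.standard_t(4, T) * 0.006
r2 = 0.7 * shock + rng.standard_t(4, T) * 0.006

# Rough t-copula calibration: correlation from Kendall's tau, degrees of freedom assumed.
tau, _ = stats.kendalltau(r1, r2)
rho, nu = np.sin(np.pi * tau / 2), 5

# Simulate from the t copula and push through the empirical margins.
n_sim = 50000
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n_sim)
w = rng.chisquare(nu, size=(n_sim, 1)) / nu
u = stats.t.cdf(z / np.sqrt(w), df=nu)
sim = np.column_stack([np.quantile(r1, u[:, 0]), np.quantile(r2, u[:, 1])])

port = sim.mean(axis=1)                                  # equally weighted portfolio
var99 = -np.quantile(port, 0.01)

# Crude in-sample backtest: how often did the historical portfolio breach the VaR?
hist_port = 0.5 * (r1 + r2)
print(f"99% VaR {var99:.4f}, in-sample exceedance rate {np.mean(hist_port < -var99):.4f}")
```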
