Similar Documents
20 similar documents retrieved.
1.
Systematic longevity risk is increasingly relevant for public pension schemes and insurance companies that provide life benefits. In view of this, mortality models should incorporate dependence between lives. However, the independent lifetime assumption is still heavily relied upon in the risk management of life insurance and annuity portfolios. This paper applies a multivariate Tweedie distribution to incorporate dependence, which it induces through a common shock component. Model parameter estimation is developed based on the method of moments and generalized to allow for truncated observations. The estimation procedure is explicitly developed for various important distributions belonging to the Tweedie family, and finally assessed using simulation.
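The common-shock idea above can be sketched in a few lines: each variable is the sum of an idiosyncratic component and a shared shock, and the shock parameter is recovered from covariances by the method of moments. The gamma margins (a Tweedie member) and all parameter values below are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
a0, a1, a2, theta = 2.0, 3.0, 5.0, 1.5   # shock and idiosyncratic shapes, common scale

# Common shock construction: X_i = Y_i + Z with a shared gamma component Z.
Z  = rng.gamma(a0, theta, n)
X1 = rng.gamma(a1, theta, n) + Z
X2 = rng.gamma(a2, theta, n) + Z

# Method-of-moments style recovery of the common-shock parameters:
# Cov(X1, X2) = Var(Z) = a0 * theta^2, Var(X_i) = (a_i + a0) * theta^2,
# E[X_i] = (a_i + a0) * theta, hence theta = Var(X_i) / E[X_i].
theta_hat = X1.var() / X1.mean()
a0_hat = np.cov(X1, X2)[0, 1] / theta_hat**2
print(f"theta_hat = {theta_hat:.3f}, a0_hat = {a0_hat:.3f}")
```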

2.
Insurance claims data usually contain a large number of zeros and exhibit fat-tailed behavior. Misestimation of one end of the tail affects the other end of the claims distribution and can compromise both the adequacy of premiums and the reserves that need to be held. In addition, insured policyholders in a portfolio are naturally non-homogeneous. Building a predictive model that simultaneously captures these peculiar characteristics of claims data and policyholder heterogeneity is an ongoing challenge for actuaries. Such models can help produce improved predictions and thereby ease the decision-making process. This article proposes the use of spliced regression models for fitting insurance loss data. A primary advantage of spliced distributions is their flexibility to accommodate modeling different segments of the claims distribution with different parametric models. The threshold that separates the segments is treated as a parameter, which presents an additional challenge in the estimation. Our simulation study demonstrates the effectiveness of using multistage optimization for likelihood inference and, at the same time, the repercussions of model misspecification. For purposes of illustration, we consider three-component spliced regression models: the first component contains the zeros, the second models the middle segment of the loss data, and the third models the tail segment. We calibrate these proposed models and evaluate their performance using a Singapore auto insurance claims dataset. The estimation results show that the spliced regression model performs better than the Tweedie regression model in terms of tail fitting and prediction accuracy.
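A minimal sketch of the three-component spliced severity described above: a point mass at zero, a lognormal body up to a threshold, and a Pareto tail beyond it. The threshold is treated as known here, and all parameter values are illustrative rather than estimates from the Singapore data.

```python
import numpy as np
from scipy import stats

p0, p_body = 0.60, 0.35          # P(zero claim), P(claim in the body)
p_tail = 1.0 - p0 - p_body
mu, sigma = 7.0, 1.0             # lognormal body parameters
tau = np.exp(mu + sigma)         # splicing threshold (treated as known here)
alpha = 2.5                      # Pareto tail index

body = stats.lognorm(s=sigma, scale=np.exp(mu))

def spliced_density(y):
    """Density of the positive part; the zero component carries mass p0."""
    y = np.asarray(y, dtype=float)
    dens = np.zeros_like(y)
    in_body = (y > 0) & (y <= tau)
    in_tail = y > tau
    # lognormal body renormalised to (0, tau]
    dens[in_body] = p_body * body.pdf(y[in_body]) / body.cdf(tau)
    # Pareto tail anchored at tau
    dens[in_tail] = p_tail * alpha * tau**alpha / y[in_tail] ** (alpha + 1)
    return dens

ys = np.array([0.5 * tau, tau, 2 * tau, 5 * tau])
print(spliced_density(ys))
```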

3.
For multiplicative pricing of non-life insurance, we report a simulation study of the mean square errors (MSEs) of point estimates from (1) the marginal totals method and (2) the Standard Generalized Linear Model (GLM) method, assuming Poisson claim numbers and gamma-distributed claim severities with constant coefficient of variation. MSEs per tariff cell are summed with insurance exposures as weights to give a total MSE. This is smallest for Standard GLM under the multiplicative assumption. With moderate deviations from parameter multiplicativity, however, the study indicates that the marginal totals method is typically better in the MSE sense when there are many tariff arguments (rating factors) and many claims, i.e. for mass consumer insurance. A method called MVW for confidence intervals, using only the compound Poisson model, is given for the marginal totals method. These confidence intervals are compared with those of Standard GLM and the Tweedie method for risk premiums in a simulation study and are found to be mostly the best. The study reports both coverage probabilities, which should be close to 0.95 for 95% confidence intervals, and interval lengths, which should be small. The Tweedie method is never found to be better than Standard GLM.
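The marginal totals method can be illustrated with a tiny two-factor tariff: multiplicative relativities are iterated until the fitted marginal claim totals, weighted by exposure, match the observed ones. The data and the two-factor setting are illustrative; the MVW confidence intervals of the paper are not reproduced.

```python
import numpy as np

exposure = np.array([[120.,  80.,  40.],
                     [ 60., 150.,  90.]])          # exposures e_ij
claims   = np.array([[ 30.,  28.,  20.],
                     [ 12.,  45.,  40.]])          # observed claim totals s_ij

a = np.ones(exposure.shape[0])                     # row (factor 1) relativities
b = np.ones(exposure.shape[1])                     # column (factor 2) relativities

# Marginal totals equations: fitted marginal claim totals must equal the
# observed ones for every level of every rating factor.
for _ in range(200):
    a = claims.sum(axis=1) / (exposure * b).sum(axis=1)
    b = claims.sum(axis=0) / (exposure * a[:, None]).sum(axis=0)

fitted = exposure * np.outer(a, b)
print("row totals match:   ", np.allclose(fitted.sum(axis=1), claims.sum(axis=1)))
print("column totals match:", np.allclose(fitted.sum(axis=0), claims.sum(axis=0)))
```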

4.
This article proposes a new loss reserving approach, inspired by the collective model of risk theory. According to the collective paradigm, we do not relate payments to specific claims or policies, but we work within a frequency-severity setting, with a number of payments in every cell of the run-off triangle, together with the corresponding paid amounts. Compared to the Tweedie reserving model, which can be seen as a compound sum with a Poisson-distributed number of terms and Gamma-distributed summands, we allow here for more general severity distributions, typically mixture models combining a light-tailed component with a heavier-tailed one, including inflation effects. The severity model is fitted to individual observations and not to aggregated data displayed in run-off triangles with a single value in every cell. In that respect, the modeling approach appears to be a powerful alternative to both the crude traditional aggregated approach based on triangles and the extremely detailed individual reserving approach developing each and every claim separately. A case study based on a motor third-party liability insurance portfolio observed over 2004–2014 is used to illustrate the relevance of the proposed approach.
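A minimal simulation sketch of the frequency-severity view of a single run-off cell: a Poisson number of payments combined with a mixture severity (lognormal body plus Pareto tail) and a simple inflation factor. All parameters are illustrative assumptions, not fitted values from the case study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_cell(n_lambda, weight_heavy, infl_factor, n_sims=50_000):
    """Simulated distribution of the total paid amount in one run-off cell."""
    totals = np.empty(n_sims)
    for k in range(n_sims):
        n = rng.poisson(n_lambda)                      # number of payments
        if n == 0:
            totals[k] = 0.0
            continue
        heavy = rng.random(n) < weight_heavy           # mixture component indicator
        sev = np.where(heavy,
                       (rng.pareto(2.5, n) + 1.0) * 5_000.0,   # heavier-tailed component
                       rng.lognormal(7.0, 0.8, n))             # light-tailed body
        totals[k] = infl_factor * sev.sum()            # apply a simple inflation effect
    return totals

cell = simulate_cell(n_lambda=12, weight_heavy=0.05, infl_factor=1.03)
print(f"mean = {cell.mean():,.0f},  99.5% quantile = {np.quantile(cell, 0.995):,.0f}")
```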

5.
We propose a multidimensional risk model in which the common shock affecting all classes of insurance business arrives according to a non-homogeneous periodic Poisson process. In this multivariate setting, we derive Lundberg-type upper bounds for the probability that ruin occurs in all classes simultaneously, using the martingale approach via the theory of piecewise deterministic Markov processes. These results are numerically illustrated in a bivariate risk model with a beta-shape periodic claim intensity function. Under the assumption of dependent heavy-tailed claims, asymptotic bounds for the finite-time ruin probabilities associated with three types of ruin in this multivariate framework are investigated.
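A minimal sketch of one ingredient of the model above: simulating arrival times from a non-homogeneous Poisson process with a beta-shape periodic intensity by Lewis-Shedler thinning. The intensity parameters and horizon are illustrative.

```python
import numpy as np
from scipy.stats import beta as beta_dist

rng = np.random.default_rng(1)
a, b, c = 2.0, 3.0, 10.0                  # beta shape parameters and annual claim volume

def intensity(t):
    """Periodic (period 1) beta-shape claim intensity."""
    return c * beta_dist.pdf(t % 1.0, a, b)

def thinning(horizon):
    """Arrival times on [0, horizon] by thinning a homogeneous Poisson process."""
    lam_max = c * beta_dist.pdf((a - 1) / (a + b - 2), a, b)   # peak of the intensity
    t, arrivals = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > horizon:
            return np.array(arrivals)
        if rng.random() < intensity(t) / lam_max:              # accept/reject step
            arrivals.append(t)

times = thinning(horizon=5.0)
print(f"{times.size} common-shock arrivals over 5 periods (expected about {5 * c:.0f})")
```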

6.
Predicting the outstanding loss liabilities for a non-life run-off portfolio, as well as quantifying the prediction error, is one of the most important actuarial tasks in non-life insurance. In this paper we consider this prediction problem in a multivariate context. More precisely, we derive the predictive distribution of the claims reserves simultaneously for several correlated run-off portfolios in the framework of the chain-ladder claims reserving method.
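For reference, the univariate chain-ladder step that the multivariate method builds on can be sketched as follows: volume-weighted development factors complete a cumulative triangle and yield a reserve per accident year. The triangle below is illustrative.

```python
import numpy as np

# Cumulative payments; np.nan marks the unobserved lower-right part.
C = np.array([[1001., 1855., 2423., 2988.],
              [1113., 2103., 2774.,  np.nan],
              [1265., 2433.,  np.nan, np.nan],
              [1490.,  np.nan, np.nan, np.nan]])

n = C.shape[0]
f = np.ones(n - 1)
for j in range(n - 1):
    rows = ~np.isnan(C[:, j + 1])                      # accident years observed at j+1
    f[j] = C[rows, j + 1].sum() / C[rows, j].sum()     # volume-weighted development factor

# Complete the triangle and read off the reserve per accident year.
completed = C.copy()
for i in range(n):
    for j in range(n - 1):
        if np.isnan(completed[i, j + 1]):
            completed[i, j + 1] = completed[i, j] * f[j]

reserve = completed[:, -1] - np.array([C[i, n - 1 - i] for i in range(n)])
print("development factors:      ", np.round(f, 3))
print("reserves by accident year:", np.round(reserve, 0))
```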

7.
That the returns on financial assets and insurance claims are not well described by the multivariate normal distribution is generally acknowledged in the literature. This paper presents a review of the use of the skew-normal distribution and its extensions in finance and actuarial science, highlighting known results as well as potential directions for future research. When skewness and kurtosis are present in asset returns, the skew-normal and skew-Student distributions are natural candidates in both theoretical and empirical work. Their parameterization is parsimonious and they are mathematically tractable. In finance, the distributions are interpretable in terms of the efficient markets hypothesis. Furthermore, they lead to theoretical results that are useful for portfolio selection and asset pricing. In actuarial science, the presence of skewness and kurtosis in insurance claims data is the main motivation for using the skew-normal distribution and its extensions. The skew-normal has been used in studies on risk measurement and capital allocation, which are two important research fields in actuarial science. Empirical studies consider the skew-normal distribution because of its flexibility, interpretability, and tractability. The paper comprises four main sections: an overview of skew-normal distributions; a review of skewness in finance, including asset pricing, portfolio selection, and time series modeling; and a review of applications in insurance, in which the use of alternative distribution functions is widespread. The final section summarizes some of the challenges associated with the use of skew-elliptical distributions and points out some directions for future research.

8.
We present a fully data-driven strategy to incorporate continuous risk factors and geographical information in an insurance tariff. A framework is developed that aligns flexibility with the practical requirements of an insurance company, the policyholder, and the regulator. Our strategy is illustrated with an example from property and casualty (P&C) insurance, namely a motor insurance case study. We start by fitting generalized additive models (GAMs) to the number of reported claims and their corresponding severity. These models allow for flexible statistical modeling in the presence of different types of risk factors: categorical, continuous, and spatial. The goal is to bin the continuous and spatial risk factors so that the resulting categorical risk factors capture the effect of the covariate on the response accurately, while being easy to use in a generalized linear model (GLM). This is in line with the requirement of an insurance company to construct a practical and interpretable tariff that can be explained easily to stakeholders. We propose to bin the spatial risk factor using Fisher’s natural breaks algorithm and the continuous risk factors using evolutionary trees. GLMs are fitted to the claims data with the resulting categorical risk factors. We find that the resulting GLMs approximate the original GAMs closely and lead to a very similar premium structure.
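A minimal sketch of the "smooth first, then bin" idea: a flexible estimate of a continuous risk factor's effect is discretised into a handful of categories. A small regression tree is used here as a convenient stand-in for the evolutionary trees and Fisher's natural breaks of the paper, and the simulated age effect is purely illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(7)
age = rng.uniform(18, 90, 5_000)
true_freq = 0.25 * np.exp(-0.03 * (age - 18)) + 0.05          # decreasing frequency by age
claims = rng.poisson(true_freq)

# Step 1: a crude smooth estimate of claim frequency by age (GAM stand-in).
grid = np.arange(18, 91)
smooth = np.array([claims[np.abs(age - g) < 3].mean() for g in grid])

# Step 2: bin the smoothed effect into a handful of categorical age classes.
tree = DecisionTreeRegressor(max_leaf_nodes=5).fit(grid.reshape(-1, 1), smooth)
bin_id = tree.apply(grid.reshape(-1, 1))
edges = [grid[bin_id == b].min() for b in np.unique(bin_id)]
print("age class lower bounds:", sorted(edges))
```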

9.
Insurance claims have deductibles, which must be considered when pricing insurance premiums. The deductible may cause censoring and truncation of the insurance claims. However, modeling the unobserved response variable using maximum likelihood in this setting may be a challenge in practice. For this reason, a practitioner may perform a regression using the observed response in order to calculate the deductible rates from the regression coefficients. A natural question is how well this approach performs and how it compares to the theoretically correct approach to rating the deductibles. A practitioner would also be interested in a systematic review of the approaches to modeling deductible rates. In this article, an overview of deductible ratemaking is provided, and the pros and cons of two deductible ratemaking approaches are compared: the regression approach and the maximum likelihood approach. The regression approach turns out to have an advantage in predicting aggregate claims, whereas the maximum likelihood approach has an advantage when calculating theoretically correct relativities for deductible levels beyond those observed in the empirical data. For demonstration, loss models are fit to the Wisconsin Local Government Property Insurance Fund data, and examples are provided for the ratemaking of per-loss deductibles offered by the fund. The article shows that the regression approach is in fact a single-parameter approximation to the true relativity curve. A comparison of selected models from the generalized beta family shows that using long-tailed severity distributions may improve the deductible rating, while advanced frequency models such as zero-one-inflated models may offer limited advantages owing to estimation issues under censoring and truncation. In addition, models for specific peril types are combined to improve the ratemaking.
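The contrast between the two approaches can be sketched for a single deductible: a naive fit to the observed payments versus a maximum likelihood fit that treats the data as left-truncated at the deductible. The lognormal ground-up severity and all parameter values are assumptions for illustration only.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)
mu, sigma, d = 8.0, 1.2, 1_000.0
ground_up = rng.lognormal(mu, sigma, 20_000)
observed = ground_up[ground_up > d]                    # losses below the deductible never reported

def neg_loglik_truncated(par):
    """Negative log-likelihood for losses left-truncated at d."""
    m, s = par[0], abs(par[1])
    dist = stats.lognorm(s=s, scale=np.exp(m))
    return -(np.log(dist.pdf(observed)) - np.log(dist.sf(d))).sum()

naive = stats.lognorm.fit(observed, floc=0)            # ignores the truncation
mle = optimize.minimize(neg_loglik_truncated,
                        x0=[np.log(observed.mean()), 1.0],
                        method="Nelder-Mead").x
print(f"true (mu, sigma)     = ({mu:.2f}, {sigma:.2f})")
print(f"naive fit            = ({np.log(naive[2]):.2f}, {naive[0]:.2f})")
print(f"truncation-aware MLE = ({mle[0]:.2f}, {abs(mle[1]):.2f})")
```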

10.
Longitudinal modeling of insurance claim counts using jitters
Modeling insurance claim counts is a critical component of the ratemaking process for property and casualty insurance. This article explores the usefulness of copulas for modeling the number of insurance claims for an individual policyholder within a longitudinal context. To address the limitations commonly attributed to copulas for multivariate discrete data, we apply a ‘jittering’ method to the claim counts, which has the effect of continuitizing the data. Elliptical copulas are proposed to accommodate the intertemporal nature of the ‘jittered’ claim counts and the unobservable subject-specific heterogeneity in the frequency of claims. Observable subject-specific effects are accounted for in the model by using available covariate information through a regression model. The predictive distribution, together with the corresponding credibility of claim frequency, can be derived from the model for ratemaking and risk classification purposes. For empirical illustration, we analyze an unbalanced longitudinal dataset of claim counts observed from a portfolio of automobile insurance policies of a general insurer in Singapore. We further establish the validity of the calibrated copula model and demonstrate that the copula with ‘jittering’ method outperforms standard count regression models.
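A minimal sketch of the jittering step: uniform noise makes the discrete counts continuous, after which the intertemporal dependence can be summarised through a copula fitted to normal scores. A Gaussian copula is used below as a simplified stand-in for the elliptical copulas and regression margins of the paper; the simulated data are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n_policy, n_year = 4_000, 3

# Simulate serially dependent counts via a shared policyholder frailty.
frailty = rng.gamma(2.0, 0.5, n_policy)
counts = rng.poisson(0.2 * frailty[:, None], size=(n_policy, n_year))

# Jitter: N* = N + V with V ~ Uniform(0,1) breaks the ties in the discrete data.
jittered = counts + rng.random(counts.shape)

# Normal scores of the jittered counts, then the implied copula correlation matrix.
u = (stats.rankdata(jittered, axis=0) - 0.5) / n_policy
z = stats.norm.ppf(u)
print(np.round(np.corrcoef(z, rowvar=False), 3))
```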

11.
Insurance companies typically face multiple sources (types) of claims. Therefore, modelling dependencies among different types of risks is extremely important for evaluating the aggregate claims of an insurer. In this paper, we first introduce a multivariate aggregate claims model, which allows dependencies among claim numbers as well as dependencies among claim sizes. For this proposed model, we derive recursive formulas for the joint probability functions of different types of claims. In addition, we extend the concept of exponential tilting to the multivariate fast Fourier transform and use it to compute the joint probability functions of the various types of claims. We provide numerical examples to compare the accuracy and efficiency of the two computation methods.
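A minimal sketch of the FFT route to an aggregate claims distribution, shown here for the univariate compound Poisson case without exponential tilting; the multivariate version in the paper applies the same idea on a multidimensional grid. The discretised severity is illustrative.

```python
import numpy as np

lam = 3.0                                   # expected number of claims
m = 2**12                                   # lattice size (ample for this example)

# Discretised severity distribution on {0, 1, 2, ...}: a geometric-type severity.
k = np.arange(m)
sev = 0.1 * 0.9**k
sev /= sev.sum()

# Aggregate distribution of S = sum of N i.i.d. severities with N ~ Poisson(lam):
# transform of S equals exp(lam * (transform of X - 1)).
phi_x = np.fft.fft(sev)
agg = np.real(np.fft.ifft(np.exp(lam * (phi_x - 1.0))))

print("total probability:", agg.sum().round(6))
print("P(S = 0) =", agg[0].round(4),
      " (theory:", np.round(np.exp(-lam * (1 - sev[0])), 4), ")")
```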

12.
The accurate prediction of long-term care insurance (LTCI) mortality, lapse, and claim rates is essential when making informed pricing and risk management decisions. Unfortunately, academic literature on the subject is sparse and industry practice is limited by software and time constraints. In this article, we review current LTCI industry modeling methodology, which is typically Poisson regression with covariate banding/modification and stepwise variable selection. We test the claim that covariate banding improves predictive accuracy, examine the potential pitfalls of stepwise selection, and contend that the assumptions required for Poisson regression are not appropriate for LTCI data. We propose several alternative models specifically tailored toward count responses with an excess of zeros and overdispersion. Using data from a large LTCI provider, we evaluate the predictive capacity of random forests and of generalized linear and additive models with zero-inflated Poisson, negative binomial, and Tweedie errors. These alternatives are compared to previously developed Poisson regression models.

Our study confirms that variable modification is unnecessary at best and automatic stepwise model selection is dangerous. After demonstrating severe overprediction of LTCI mortality and lapse rates under the Poisson assumption, we show that a Tweedie GLM enables much more accurate predictions. Our Tweedie regression models improve average predictive accuracy (measured by several prediction error statistics) over Poisson regression models by as much as four times for mortality rates and 17 times for lapse rates.
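A minimal sketch of the kind of comparison described above: the same covariates are passed to a Poisson GLM and to a Tweedie GLM (statsmodels, variance power 1.5) on simulated zero-heavy, overdispersed data. The data-generating process, the chosen variance power, and the error metric are illustrative assumptions, not the paper's LTCI setup.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 10_000
x = rng.normal(size=(n, 2))
X = sm.add_constant(x)

# Zero-heavy, overdispersed counts: Poisson mixed over a gamma frailty.
mu = np.exp(-2.0 + 0.8 * x[:, 0] - 0.5 * x[:, 1])
y = rng.poisson(mu * rng.gamma(0.5, 2.0, n))

poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
tweedie_fit = sm.GLM(y, X, family=sm.families.Tweedie(
    link=sm.families.links.Log(), var_power=1.5)).fit()

for name, fit in [("Poisson", poisson_fit), ("Tweedie", tweedie_fit)]:
    rmse = np.sqrt(np.mean((y - fit.fittedvalues) ** 2))
    print(f"{name}: coefficients {np.round(fit.params, 2)}, in-sample RMSE {rmse:.3f}")
```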


13.
This article provides a detailed analysis of the operation of the National Flood Insurance Program (NFIP) in Florida, which accounts for 40 percent of the NFIP portfolio. We study the demand for flood insurance with a data set of more than 7.5 million NFIP policies-in-force (the largest ever studied) for the years 2000–2005, as well as all NFIP claims filed in Florida. We answer four questions: What are the characteristics of the buyers of flood insurance? What types of contracts (deductibles and coverage levels) are purchased? What are the determinants of claims payments? How are prices determined, and how much does NFIP insurance cost?

14.
The Tweedie family, whose members are characterized by the choice of power unit variance function, includes heavy-tailed distributions, and as such could be of significant relevance to actuarial science. The class includes the Normal, Poisson, Gamma, Inverse Gaussian, Stable and Compound Poisson distributions. In this study, we explore the intrinsic objective Bayesian point estimator for the mean value of the Tweedie family based on the intrinsic discrepancy loss function – an inherent loss function arising only from the underlying distribution or model, without any subjective considerations – and the Jeffreys prior distribution, which is designed to express absence of information about the quantity of interest. We compare the proposed point estimator with the Bayes estimator, i.e. the posterior mean based on the quadratic loss function and the Jeffreys prior distribution. We carry out a numerical study to illustrate the methodology in the context of the Inverse Gaussian model, which is unexplored in this context and is useful for insurance contracts.

15.
Credibility is a form of insurance pricing that is widely used, particularly in North America. The theory of credibility has been called a “cornerstone” of actuarial science. Students of the North American actuarial bodies also study loss distributions, the process of statistical inference that relates a set of data to a theoretical (loss) distribution. In this work, we develop a direct link between credibility and loss distributions through the notion of a copula, a tool for understanding relationships among multivariate outcomes.

This paper develops credibility using a longitudinal data framework. In a longitudinal data framework, one might encounter data from a cross section of risk classes (towns) with a history of insurance claims available for each risk class. For the marginal claims distributions, we use generalized linear models, an extension of linear regression that also encompasses Weibull and Gamma regressions. Copulas are used to model the dependencies over time; specifically, this paper is the first to propose using a t-copula in the context of generalized linear models. The t-copula is the copula associated with the multivariate t-distribution; like the univariate t-distribution, it seems especially suitable for empirical work. Moreover, we show that the t-copula gives rise to easily computable predictive distributions that we use to generate credibility predictors. Like Bayesian methods, our copula credibility prediction methods allow us to provide an entire distribution of predicted claims, not just a point prediction.

We present an illustrative example of Massachusetts automobile claims, and compare our new credibility estimates with those currently existing in the literature.
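A minimal sketch of t-copula dependence between a policyholder's yearly claims: multivariate-t draws are converted to uniforms and pushed through gamma margins (standing in for the GLM margins of the paper), and a crude conditional average illustrates the credibility-style predictive mean. All parameter values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)
df, rho = 5, 0.6
corr = np.array([[1.0, rho, rho],
                 [rho, 1.0, rho],
                 [rho, rho, 1.0]])

# 1. t-copula: uniforms with the prescribed intertemporal dependence.
z = stats.multivariate_t(loc=np.zeros(3), shape=corr, df=df).rvs(
    200_000, random_state=rng)
u = stats.t.cdf(z, df)

# 2. Gamma margins for the yearly claim amounts (mean 1000 each year).
claims = stats.gamma(a=2.0, scale=500.0).ppf(u)

# 3. Credibility-style predictive mean: year 3 given large claims in years 1 and 2.
past_large = (claims[:, 0] > 1500) & (claims[:, 1] > 1500)
print("unconditional mean, year 3:", claims[:, 2].mean().round(1))
print("predictive mean, year 3   :", claims[past_large, 2].mean().round(1))
```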

16.
We introduce a model to discuss an optimal investment problem of an insurance company using a game theoretic approach. The model is general enough to include economic risk, financial risk, insurance risk, and model risk. The insurance company invests its surplus in a bond and a stock index. The interest rate of the bond is stochastic and depends on the state of an economy described by a continuous-time, finite-state, Markov chain. The stock index dynamics are governed by a Markov, regime-switching, geometric Brownian motion modulated by the chain. The company receives premiums and pays aggregate claims. Here the aggregate insurance claims process is modeled by either a Markov, regime-switching, random measure or a Markov, regime-switching, diffusion process modulated by the chain. We adopt a robust approach to model risk, or uncertainty, and generate a family of probability measures using a general approach for a measure change to incorporate model risk. In particular, we adopt a Girsanov transform for the regime-switching Markov chain to incorporate model risk in modeling economic risk by the Markov chain. The goal of the insurance company is to select an optimal investment strategy so as to maximize either the expected exponential utility of terminal wealth or the survival probability of the company in the ‘worst-case’ scenario. We formulate the optimal investment problems as two-player, zero-sum, stochastic differential games between the insurance company and the market. Verification theorems for the HJB solutions to the optimal investment problems are provided and explicit solutions for optimal strategies are obtained in some particular cases.
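A minimal sketch of the asset side of the model above: a stock index whose drift and volatility are modulated by a two-state Markov chain, simulated on a daily grid. Rates, volatilities, and transition intensities are illustrative, and the game-theoretic optimisation itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(8)
dt, n_steps = 1 / 252, 252 * 5
mu  = np.array([0.08, 0.02])          # drift in 'growth' and 'recession' regimes
vol = np.array([0.15, 0.35])          # volatility per regime
q = np.array([[-0.5, 0.5],            # generator of the modulating Markov chain
              [ 1.0, -1.0]])

state, s = 0, np.empty(n_steps + 1)
s[0] = 100.0
for t in range(n_steps):
    # Regime switch with probability approximately intensity * dt over a small step.
    if rng.random() < -q[state, state] * dt:
        state = 1 - state
    # Euler step of the regime-switching geometric Brownian motion.
    s[t + 1] = s[t] * np.exp((mu[state] - 0.5 * vol[state] ** 2) * dt
                             + vol[state] * np.sqrt(dt) * rng.normal())

print(f"terminal index level after 5 years: {s[-1]:.2f}")
```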

17.
Mathematical claims modelling is a traditional art of actuarial practice. However, changing expectations concerning the complexity and efficiency of mathematical models have led to numerous developments in recent years, particularly in the area of multivariate models. This goes hand in hand with questions about what is “possible” and what is “meaningful”. Thanks to sophisticated improvements of the classical models, the one-dimensional situation is well tractable today whenever sufficient data are available. A more difficult task is the modelling of situations where data are sparse or, alternatively, highly complex. Here, theoretical arguments sometimes have to be used for justification, as in the area of large claims or extreme values. A more modern approach to multivariate situations is via copulas, which allow the problem of the marginal distributions to be separated from the joint dependence structure. However, there is so far little experience with such tools in the insurance world, which makes further scientific research necessary.

18.
The correlation among multiple lines of business plays an important role in quantifying the uncertainty of loss reserves for insurance portfolios. To accommodate correlation, most multivariate loss-reserving methods focus on the pairwise association between corresponding cells in multiple run-off triangles. However, such practice usually relies on the independence assumption across accident years and ignores calendar year effects that could affect all open claims simultaneously and induce dependencies among loss triangles. To address this issue, we study a Bayesian log-normal model for the prediction of outstanding claims for dependent lines of business. In addition to the pairwise correlation, our method allows for an explicit examination of the correlation due to common calendar year effects. Further, different specifications of the calendar year trend are considered to reflect valuation actuaries’ prior knowledge of claim development. In a case study, we analyze an insurance portfolio of personal and commercial auto lines from a major U.S. property-casualty insurer. It is shown that the incorporation of calendar year effects improves model fit significantly, though it contributes substantially to the predictive variability. The availability of the realizations of the predicted claims permits us to perform a retrospective test, which suggests that this extra prediction uncertainty is indispensable in modern risk management practice.
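A minimal sketch of a log-normal run-off model with a calendar year effect, here a one-off diagonal shock, which is one identifiable specification of the calendar year trend. Ordinary least squares on a simulated triangle stands in for the Bayesian estimation of the paper; the shock year and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(13)
n, shock_year, shock = 10, 5, 0.30
acc = rng.normal(8.0, 0.2, n)                    # accident-year levels
dev = np.linspace(0.0, -2.0, n)                  # development-year pattern

X, y = [], []
for i in range(n):
    for j in range(n - i):                       # observed upper triangle only
        log_c = acc[i] + dev[j] + shock * (i + j == shock_year) + rng.normal(0, 0.1)
        row = np.zeros(2 * n + 1)
        row[i], row[n + j] = 1.0, 1.0            # accident-year and development-year dummies
        row[2 * n] = float(i + j == shock_year)  # calendar-year (diagonal) shock indicator
        X.append(row)
        y.append(log_c)

coef, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
print(f"estimated calendar-year shock: {coef[-1]:.3f}  (true value {shock})")
```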

19.
Traditionally, insurance medicine has made significant contributions to sound underwriting and to improving insurers' margin of risk. However, it may be less effective in emerging markets, because such markets have few clinical follow-up studies and lack medico-actuarial research. Providing a medical expert's opinion on insurance claim administration is called a medical claims review. It is subdivided into medical verification and advice for claims staff. In addition to medical risk selection, it is a new field of insurance medicine in emerging insurance markets, which are often characterized by assertive insurance consumers. Medical contributions at the claims adjudication stage compare the coverage provided in the product with the information provided in the claim, based on medical records, and assess the agreement between them; this is called medical verification. Insurance doctors can also use their medical knowledge to help claims staff inform claimants about the medical basis of claim decisions.

20.

Explicit, two-sided bounds are derived for the probability of ruin of an insurance company whose premium income is represented by an arbitrary, increasing real function, the claims are dependent, integer-valued random variables, and the inter-occurrence times are exponentially, non-identically distributed. It is shown that the two bounds coincide when the claim arrival moments form a Poisson point process. An expression for the survival probability is further derived in this special case, assuming that the claims are integer-valued, i.i.d. random variables. This expression is compared with a different formula, obtained recently by Picard & Lefevre (1997) in terms of generalized Appell polynomials. The particular case of constant-rate premium income and non-zero initial capital is considered. A connection of the survival probability to multivariate B-splines is also established.
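A minimal Monte Carlo sketch of the setting above: a surplus process with a general increasing premium income function, exponential waiting times, and integer-valued claims, checked for ruin at claim instants. The paper derives analytic two-sided bounds; the premium function, claim law, and horizon below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(17)

def premium(t):
    """Arbitrary increasing premium income earned up to time t."""
    return 2.5 * t + 0.5 * np.sqrt(t)

def survives(u, horizon, lam=1.0):
    """One path: does the surplus stay non-negative up to the horizon?"""
    t, paid = 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / lam)           # exponential inter-occurrence time
        if t > horizon:
            return True
        paid += rng.geometric(0.5)                # integer-valued claim size (mean 2)
        # Ruin can only occur at claim instants, since premium income is increasing.
        if u + premium(t) - paid < 0:
            return False

n_sim, u0 = 20_000, 5.0
phi = np.mean([survives(u0, horizon=50.0) for _ in range(n_sim)])
print(f"estimated survival probability up to t = 50: {phi:.3f}")
```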
