Similar Documents (20 results)
1.
Abstract

Composite models have a long history in actuarial science because they provide a flexible method of curve-fitting for heavy-tailed insurance losses. Ongoing research in this area continuously suggests methodological improvements for existing composite models and considers new ones. A number of different composite models have previously been proposed in the literature to fit the popular data set of Danish fire losses. This paper provides the most comprehensive analysis of composite loss models on the Danish fire losses data set to date, evaluating 256 composite models derived from 16 parametric distributions commonly used in actuarial science. Estimating these composite models entails computational challenges that, if not suitably addressed, may lead to sub-optimal solutions. General implementation strategies for parameter estimation are developed so that a viable solution is reached automatically, regardless of the specific head and/or tail distributions specified. The results identify new well-fitting composite models and provide valuable insights into the selection of composite models for which tail-evaluation measures can be useful in making risk management decisions.
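As a concrete illustration of what a composite (spliced) model looks like in practice, here is a minimal Python sketch that fits a lognormal head spliced with a Pareto tail by maximum likelihood. It simplifies the models in the paper: real composite models typically also impose continuity and differentiability at the threshold, and the synthetic data, starting values, and optimizer settings below are arbitrary assumptions.

```python
# A minimal sketch (not the paper's implementation): fitting a spliced
# lognormal-Pareto composite model by maximum likelihood.
import numpy as np
from scipy import stats, optimize

def neg_loglik(params, x):
    mu, log_sig, log_theta, log_alpha, logit_w = params
    sig, theta, alpha = np.exp(log_sig), np.exp(log_theta), np.exp(log_alpha)
    w = 1.0 / (1.0 + np.exp(-logit_w))          # weight on the head piece
    head = stats.lognorm(s=sig, scale=np.exp(mu))
    below, above = x <= theta, x > theta
    ll = np.sum(np.log(w) + head.logpdf(x[below]) - np.log(head.cdf(theta)))
    # Pareto tail: f(x) = alpha * theta**alpha / x**(alpha+1) for x > theta
    ll += np.sum(np.log(1 - w) + np.log(alpha) + alpha * np.log(theta)
                 - (alpha + 1) * np.log(x[above]))
    return -ll

x = stats.lognorm(s=1.0, scale=2.0).rvs(size=1000, random_state=1)  # stand-in data
start = np.array([np.log(2.0), 0.0, np.log(np.quantile(x, 0.9)), 0.0, 2.0])
fit = optimize.minimize(neg_loglik, start, args=(x,), method="Nelder-Mead")
print(fit.x)
```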

2.
Abstract

Extreme value theory describes the behavior of random variables at extremely high or low levels. The application of extreme value theory to statistics allows us to fit models to data from the upper tail of a distribution. This paper presents a statistical analysis of advanced age mortality data, using extreme value models to quantify the upper tail of the distribution of human life spans.

Our analysis focuses on mortality data from two sources. Statistics Canada publishes the annual number of deaths in Canada, broken down by gender and age. We use the deaths data from 1949 to 1997 in our analysis. The Japanese Ministry of Health, Labour and Welfare also publishes detailed annual mortality data, including the 10 oldest reported ages at death in each year. We analyze the Japanese data over the period from 1980 to 2000.

Using the r-largest and peaks-over-threshold approaches to extreme value modeling, we fit generalized extreme value and generalized Pareto distributions to the life span data. Changes in distribution by birth cohort or over time are modeled through the use of covariates. We then evaluate the appropriateness of the fitted models and discuss reasons for their shortcomings. Finally, we use our findings to address the existence of a finite upper bound on the life span distribution and the behavior of the force of mortality at advanced ages.
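For illustration, a peaks-over-threshold fit of the kind described can be sketched in a few lines with scipy; the synthetic ages, the threshold choice, and the absence of covariates are simplifying assumptions not taken from the paper.

```python
# Illustrative peaks-over-threshold fit (not the paper's data): a generalized
# Pareto distribution fitted to exceedances of ages at death over a threshold.
import numpy as np
from scipy import stats

ages = stats.gumbel_r(loc=85, scale=5).rvs(size=5000, random_state=0)  # synthetic
u = np.quantile(ages, 0.95)            # threshold choice is a modeling decision
excess = ages[ages > u] - u
shape, loc, scale = stats.genpareto.fit(excess, floc=0.0)
# A negative shape parameter implies a finite upper bound at u + scale/|shape|,
# which is how such fits speak to the question of a maximum human life span.
print(shape, scale)
```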

3.
Abstract

Pet insurance in North America continues to be a growing industry. Unlike in Europe, where some countries have as much as 50% of the pet population insured, very few pets in North America are insured. Pricing practices in the past have relied on market share objectives rather than on actual experience, and pricing continues to be performed on this basis with little consideration for actuarial principles and techniques. Developing mortality and morbidity models for use in pricing and in new product development is essential for pet insurance. This paper examines insurance claims as experienced in the Canadian market. The time-to-event data are investigated using the Cox proportional hazards model. The claim number follows a nonhomogeneous Poisson process with covariates, and the claim size random variable is assumed to follow a lognormal distribution. These two models work well for aggregate claims with covariates. The first three central moments of the aggregate claims for one insured animal, as well as for a block of insured animals, are derived. We illustrate the models using data collected over an eight-year period.
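The moment calculation described above can be illustrated for a single insured animal, assuming a homogeneous Poisson claim count and lognormal claim sizes (the paper's models additionally include covariates): for a compound Poisson sum, the first three central moments are λE[X], λE[X²], and λE[X³].

```python
# Sketch of the aggregate-claims moment calculation, assuming a homogeneous
# Poisson claim count and lognormal claim sizes; all inputs are illustrative.
import numpy as np

def compound_poisson_central_moments(lam, mu, sigma):
    # k-th raw moment of a lognormal(mu, sigma): exp(k*mu + k^2 * sigma^2 / 2)
    m = [np.exp(k * mu + 0.5 * k**2 * sigma**2) for k in (1, 2, 3)]
    mean = lam * m[0]      # E[S]          = lambda * E[X]
    var = lam * m[1]       # Var(S)        = lambda * E[X^2]
    third = lam * m[2]     # E[(S - ES)^3] = lambda * E[X^3]
    return mean, var, third

print(compound_poisson_central_moments(lam=0.8, mu=5.0, sigma=1.2))
```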

4.
Quantitative Finance, 2013, 13(6), 470–480
Abstract

Agent-based models of market dynamics must strike a compromise between the structural assumptions that represent the trading mechanism and the behavioural assumptions that describe the rules by which traders make their decisions. We present a structurally detailed model of an order-driven stock market and show that a minimal set of behavioural assumptions suffices to generate a leptokurtic distribution of short-term log-returns. This result supports the conjecture that the emergence of some statistical properties of financial time series is due to the microstructure of stock markets.
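As an illustration only, the following toy simulation captures the flavor of such an experiment: zero-intelligence traders submit random limit and market orders to a simple order book, and the excess kurtosis of the resulting returns is measured. This is a stand-in sketch, not the paper's structurally detailed model, and every parameter below is arbitrary.

```python
# Toy zero-intelligence order-driven market (a stand-in, not the paper's model).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
bids, asks = [99.9], [100.1]        # resting limit-order prices
prices = [100.0]                     # transaction price series
for _ in range(10000):
    mid = 0.5 * (max(bids) + min(asks))
    if rng.random() < 0.5:           # a limit order arrives near the mid price
        p = mid + rng.normal(scale=0.1)
        (bids if p < mid else asks).append(p)
    elif rng.random() < 0.5 and len(asks) > 1:
        prices.append(asks.pop(asks.index(min(asks))))  # market buy lifts best ask
    elif len(bids) > 1:
        prices.append(bids.pop(bids.index(max(bids))))  # market sell hits best bid

logret = np.diff(np.log(prices))
print("excess kurtosis:", stats.kurtosis(logret))  # positive values mean fat tails
```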

5.
Abstract

Longevity improvements have contributed to widespread underfunding of pension plans and losses in insured annuity portfolios. Insurers might reasonably expect some upside from the effect of lower mortality on their life business. Although mortality improvement scales, such as the Society of Actuaries Scale AA, are widely employed in pension and annuity valuation, the derivation of these scales appears heuristic, leading to problems in deriving meaningful measures of uncertainty. We explore the evidence on mortality trends in Canadian life insurance company data using stochastic models, and we use the more credible population data to benchmark the insured lives data. Finally, we derive a practical, model-based formula for actuaries to incorporate mortality improvement and the associated uncertainty into their calculations.
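For context, a deterministic improvement scale such as Scale AA is applied as sketched below; the paper's point is to replace the heuristic scale with a stochastic, model-based formula carrying an uncertainty measure, which this sketch does not attempt.

```python
# Sketch of how a deterministic improvement scale is applied in valuation;
# the inputs (base rate, improvement factor, horizon) are hypothetical.
import numpy as np

def improved_q(q_base, aa, years):
    """Project a base mortality rate forward: q(x, t) = q(x, 0) * (1 - AA_x)^t."""
    return q_base * (1.0 - aa) ** years

# hypothetical inputs: base rate for a 65-year-old, 1% annual improvement
print(improved_q(q_base=0.012, aa=0.010, years=20))
```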

6.
Abstract

Methods for experience rating of group life contracts are obtained as empirical Bayes or linear Bayes solutions in heterogeneity models. Each master contract is assigned a latent random quantity representing unobservable risk characteristics, which comprise mortality and possibly also age distribution and distribution of the sums insured, depending on the information available about the group. Hierarchical extensions of the set-up are discussed. An application of the theory to data from an authentic portfolio of groups revealed substantial between-group risk variations, hence experience rating could be statistically justified.
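A minimal linear Bayes sketch in this spirit is the Bühlmann credibility estimator on a portfolio of groups; the paper's heterogeneity models are richer, and the data below are synthetic.

```python
# Minimal empirical (linear) Bayes sketch: Buhlmann credibility estimated
# from a portfolio of master contracts with several years of experience.
import numpy as np

def buhlmann(claims):
    """claims: 2-D array, one row of yearly claim ratios per master contract."""
    n_groups, n_years = claims.shape
    group_means = claims.mean(axis=1)
    s2 = claims.var(axis=1, ddof=1).mean()       # expected process variance
    a = max(group_means.var(ddof=1) - s2 / n_years, 0.0)  # variance of means
    z = n_years * a / (n_years * a + s2)         # credibility factor
    return z * group_means + (1 - z) * claims.mean()

rng = np.random.default_rng(1)
claims = rng.gamma(shape=2.0, scale=0.5, size=(10, 7))   # synthetic portfolio
print(buhlmann(claims))
```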

7.
Abstract

Dufresne et al. (1991) introduced a general risk model defined as the limit of compound Poisson processes. Such a model is either a compound Poisson process itself or a process with an infinite number of small jumps. Later, in a series of now classical papers, the joint distribution of the time of ruin, the surplus before ruin, and the deficit at ruin was studied (Gerber and Shiu 1997, 1998a, 1998b; Gerber and Landry 1998). These works use the classical and the perturbed risk models and hint that the results can be extended to gamma and inverse Gaussian risk processes.

In this paper we work out this extension to a generalized risk model driven by a nondecreasing Lévy process. Unlike the classical case, where the individual claim size distribution is modeled and the aggregate claims distribution derived from it, here the aggregate claims distribution is known in closed form: it is simply the one-dimensional distribution of a subordinator. Embedded in this wide family of risk models we find the gamma, inverse Gaussian, and generalized inverse Gaussian processes. Expressions for the Gerber-Shiu function are given in some of these special cases, and numerical illustrations are provided.
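A Monte Carlo sketch of one special case, a surplus process driven by a gamma subordinator, is given below; the grid discretization, parameters, and premium loading are illustrative assumptions, and the simulation only approximates the ruin quantities that the Gerber-Shiu function treats analytically.

```python
# Monte Carlo sketch of a risk model driven by a nondecreasing Levy process:
# surplus U(t) = u + c*t - S(t), with S a gamma subordinator. Ruin is checked
# on a discrete time grid, so this is only an approximation.
import numpy as np

rng = np.random.default_rng(2)
u, c, T, dt = 10.0, 1.2, 100.0, 0.1      # initial surplus, premium rate, horizon
a, b = 1.0, 1.0                           # gamma process: S(t) ~ Gamma(a*t, rate b)
n_steps, n_paths = int(T / dt), 2000

ruined = 0
for _ in range(n_paths):
    # independent gamma increments make S(t) a nondecreasing Levy process
    jumps = rng.gamma(shape=a * dt, scale=1.0 / b, size=n_steps)
    surplus = u + c * dt * np.arange(1, n_steps + 1) - np.cumsum(jumps)
    ruined += (surplus < 0).any()
print("estimated ruin probability:", ruined / n_paths)
```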

8.
Abstract

This paper provides a general approach to constructing distortion operators that can price financial and insurance risks. Our approach generalizes the Wang (2000) transform and recovers multiple distortions proposed in the literature as particular cases. It enables the design of distortions that are consistent with various pricing principles used in finance and insurance, such as no-arbitrage models, equilibrium models, and actuarial premium calculation principles. Such distortions allow for the incorporation of risk aversion, distributional features (e.g., skewness and kurtosis), and other considerations relevant to pricing contingent claims. The pricing performance of multiple distortions obtained through our approach is assessed on CAT bond data. This paper is the first to provide evidence that jump-diffusion models are appropriate for CAT bond pricing and that natural disaster aversion affects empirical prices. Finally, a simpler distortion based on a distribution mixture is proposed for CAT bond pricing to facilitate implementation.
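As a point of reference, the Wang (2000) transform that the paper generalizes is g(u) = Φ(Φ⁻¹(u) + λ); a minimal sketch of pricing an empirical loss sample with it follows. The loss data and the value of λ are illustrative.

```python
# Sketch of pricing with a distortion operator, using the Wang transform
# g(u) = Phi(Phi^{-1}(u) + lambda); the price is the expectation of the loss
# under the distorted survival function.
import numpy as np
from scipy import stats

def wang_price(losses, lam):
    """Distorted expectation of an empirical loss sample (discrete version)."""
    x = np.sort(losses)
    n = len(x)
    s = 1.0 - np.arange(n + 1) / n                 # empirical survival function
    g = stats.norm.cdf(stats.norm.ppf(np.clip(s, 1e-12, 1 - 1e-12)) + lam)
    g[0], g[-1] = 1.0, 0.0                         # endpoints of the distortion
    # price = sum of x_i times the distorted probability mass on each point
    return np.sum(x * (g[:-1] - g[1:]))

losses = stats.lognorm(s=1.0, scale=1.0).rvs(size=10000, random_state=3)
print(wang_price(losses, lam=0.0), wang_price(losses, lam=0.25))  # lam=0: plain mean
```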

9.
The calculation of VaR involves dealing with the confidence level, the time horizon, and the true underlying conditional distribution function of asset returns. In this paper, we examine how using a specific distribution function that fits the lower-tail data of the observed distribution of asset returns well affects the accuracy of VaR estimates. In our analysis, we consider distributional forms that capture the excess kurtosis characteristic of stock return distributions, and we compare their performance using several international stock indices. JEL Classification C15 · G10
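A minimal sketch of the kind of comparison the paper performs: parametric VaR from a fat-tailed Student-t fit versus a normal fit, on synthetic returns standing in for an index series.

```python
# Parametric VaR from a heavy-tailed fit versus a normal fit (synthetic data).
import numpy as np
from scipy import stats

rets = stats.t(df=4).rvs(size=2500, random_state=4) * 0.01  # stand-in daily returns
alpha = 0.01                                                 # 99% confidence level

mu, sig = rets.mean(), rets.std(ddof=1)
var_normal = -(mu + sig * stats.norm.ppf(alpha))

df, loc, scale = stats.t.fit(rets)
var_t = -(loc + scale * stats.t.ppf(alpha, df))

# The fat-tailed fit typically yields a larger (more conservative) 99% VaR.
print(f"normal VaR: {var_normal:.4f}, Student-t VaR: {var_t:.4f}")
```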

10.
Abstract

In this paper we investigate the valuation of investment guarantees in a multivariate (discrete-time) framework. We show how to build multivariate models in general, and we survey the most important multivariate GARCH models. A direct multivariate application of regime-switching models is also discussed, as are the estimation of these models by maximum likelihood and their comparison in a multivariate setting. The computation of the CTE provision is also presented. We estimate the models on a multivariate dataset (Canada, United States, United Kingdom, and Japan) and compare the quality of their fit using multiple criteria and tests. We observe that multivariate GARCH models provide a better overall fit than regime-switching models. However, regime-switching models appropriately represent the fat tails of the returns distribution, which is where most GARCH models fail. This leads to significant differences in the value of the CTE provisions; in general, provisions computed with regime-switching models are higher. Thus, the results from this multivariate analysis are in line with what has been obtained in the literature on univariate models.
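For reference, the CTE provision referred to above is the average loss in the worst tail of the scenario distribution; a sketch with stand-in scenarios follows.

```python
# Sketch of the CTE computation: the mean of the worst (1 - alpha) share of
# simulated losses. The scenario distribution below is purely illustrative.
import numpy as np

def cte(losses, alpha=0.95):
    """Conditional tail expectation: mean loss beyond the alpha-quantile."""
    q = np.quantile(losses, alpha)
    return losses[losses >= q].mean()

rng = np.random.default_rng(5)
scenario_losses = rng.lognormal(mean=0.0, sigma=1.0, size=100000)  # hypothetical
print("CTE(95%):", cte(scenario_losses, 0.95))
```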

11.
Abstract

In this paper, the author reviews some aspects of Bayesian data analysis and discusses how a variety of actuarial models can be implemented and analyzed in accordance with the Bayesian paradigm using Markov chain Monte Carlo techniques via the BUGS (Bayesian inference Using Gibbs Sampling) suite of software packages. The emphasis is placed on actuarial loss models, but other applications are referenced, and directions are given for obtaining documentation for additional worked examples on the World Wide Web.
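In the same spirit (though without BUGS itself), here is a tiny hand-rolled MCMC example: random-walk Metropolis for the rate of exponential losses under a gamma prior. All data and tuning constants are illustrative.

```python
# Minimal Markov chain Monte Carlo for a loss model: random-walk Metropolis
# for the rate of exponentially distributed losses under a Gamma(2, 1) prior.
import numpy as np

rng = np.random.default_rng(6)
losses = rng.exponential(scale=2.0, size=200)     # synthetic losses (true rate 0.5)

def log_post(rate):
    if rate <= 0:
        return -np.inf
    log_lik = len(losses) * np.log(rate) - rate * losses.sum()
    log_prior = (2 - 1) * np.log(rate) - rate      # Gamma(shape=2, rate=1) prior
    return log_lik + log_prior

chain, rate = [], 1.0
for _ in range(20000):
    prop = rate + rng.normal(scale=0.05)
    if np.log(rng.random()) < log_post(prop) - log_post(rate):
        rate = prop
    chain.append(rate)
print("posterior mean rate:", np.mean(chain[5000:]))   # burn-in discarded
```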

12.
13.

We propose a fully Bayesian approach to non-life risk premium rating, based on hierarchical models with latent variables for both claim frequency and claim size. Inference is based on the joint posterior distribution and is performed by Markov chain Monte Carlo. Rather than plugging in point estimates of all unknown parameters, we account for all sources of uncertainty simultaneously when the model is used to predict claims and estimate risk premiums. Several models are fitted to both a simulated dataset and a small portfolio covering theft from cars. We show that interaction among latent variables can improve predictions significantly, and we investigate when such interaction is not necessary. We compare our results with those obtained under a standard generalized linear model and show through numerical simulation that geographically located and spatially interacting latent variables can successfully compensate for missing covariates. However, when applied to the real portfolio data, the proposed models are not better than standard models, owing to the lack of spatial structure in the data.
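A stripped-down version of the latent-variable idea, for claim frequency only and without the spatial interaction the paper studies: a Poisson-gamma hierarchy in which each group's latent rate receives a full posterior rather than a plug-in estimate. All inputs are hypothetical.

```python
# Poisson-gamma hierarchy for claim frequency: lambda_i | data is conjugate,
# Gamma(alpha + n_i, beta + e_i), so the full posterior is sampled directly.
import numpy as np

rng = np.random.default_rng(7)
exposure = np.array([120.0, 40.0, 300.0, 75.0])   # hypothetical group exposures
counts = np.array([10, 1, 33, 4])                 # observed claim counts
alpha, beta = 2.0, 20.0                           # Gamma hyperprior on latent rates

post = rng.gamma(shape=alpha + counts[:, None],
                 scale=1.0 / (beta + exposure[:, None]),
                 size=(len(counts), 10000))
print("posterior mean rates:", post.mean(axis=1))
print("95% intervals:", np.percentile(post, [2.5, 97.5], axis=1).T)
```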

14.
This paper proposes a two-step methodology for Value-at-Risk prediction. The first step estimates a GARCH model by quasi-maximum likelihood; the second step fits the skewed t distribution of Azzalini and Capitanio [J. R. Stat. Soc. B, 2003, 65, 367–389] to the model-filtered returns. The predictive performance of this method is compared to single-step joint estimation of the same data generating process, to the well-known GARCH-EVT model, and to a comprehensive set of other market risk models. Backtesting results show that the proposed two-step method outperforms most benchmarks, including the classical joint estimation method of the same data generating process, and performs competitively with the GARCH-EVT model. This paper recommends two robust models to risk managers of emerging market stock portfolios, both estimated in two steps: the GJR-GARCH-EVT model and the two-step GARCH-St model proposed in this study.
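A sketch of the two-step recipe, assuming the third-party `arch` package for step 1; since scipy does not ship the Azzalini-Capitanio skewed t, a symmetric Student t stands in for it in step 2, so this approximates the proposed method rather than reproducing it.

```python
# Two-step VaR sketch: (1) GARCH(1,1) by quasi-ML, (2) a heavy-tailed law
# fitted to the standardized residuals. Requires the `arch` package.
import numpy as np
from scipy import stats
from arch import arch_model

rets = stats.t(df=5).rvs(size=2000, random_state=8)   # stand-in percent returns

res = arch_model(rets, vol="GARCH", p=1, q=1, dist="normal").fit(disp="off")
z = np.asarray(res.std_resid)                 # filtered returns from step 1
df, loc, scale = stats.t.fit(z)               # step 2: tail distribution

sigma_next = np.sqrt(res.forecast(horizon=1).variance.values[-1, 0])
var_99 = -(res.params["mu"] + sigma_next * (loc + scale * stats.t.ppf(0.01, df)))
print("one-day 99% VaR forecast (percent):", var_99)
```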

15.
Abstract

I have been asked to provide some introductory remarks on the structure of the financial services industry in the 21st century. Each of the panelists will give their own unique perspective on why consolidation is or is not a good idea for the firms where they work. I, on the other hand, would like to take a little time to provide an overview of what I see as competing “models” for the delivery of financial services. I will try to argue that these models are not necessarily mutually exclusive and that certain clientele may be attracted to one or the other. Thus, integration will indeed turn out to be right for some and wrong for others.

16.
Abstract

In the classical Black-Scholes model, the logarithm of the stock price has a normal distribution, which excludes skewness. In this paper we consider models that allow for skewness. We propose an option-pricing formula that contains a linear adjustment to the Black-Scholes formula. This approximation is derived in the shifted Poisson model, which is a complete market model in which the exact option price has some undesirable features. The same formula is obtained in some incomplete market models in which it is assumed that the price of an option is defined by the Esscher method. For a European call option, the adjustment for skewness can be positive or negative, depending on the strike price.
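For reference, the Black-Scholes call price that the paper's formula linearly adjusts is computed below; the skewness correction itself depends on the shifted Poisson and Esscher arguments in the paper and is not reproduced here.

```python
# Black-Scholes European call price, the baseline that the paper adjusts
# for skewness. Inputs are illustrative.
import numpy as np
from scipy import stats

def bs_call(s, k, t, r, sigma):
    d1 = (np.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * np.sqrt(t))
    d2 = d1 - sigma * np.sqrt(t)
    return s * stats.norm.cdf(d1) - k * np.exp(-r * t) * stats.norm.cdf(d2)

print(bs_call(s=100, k=105, t=0.5, r=0.02, sigma=0.25))
```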

17.
Microscopic simulation models are often evaluated based on visual inspection of the results. This paper presents formal econometric techniques to compare microscopic simulation (MS) models with real-life data. A related result is a methodology to compare different MS models with each other. For this purpose, possible parameters of interest, such as mean returns, or autocorrelation patterns, are classified and characterized. For each class of characteristics, the appropriate techniques are presented. We illustrate the methodology by comparing the MS model developed by He and Li [J. Econ. Dynam. Control, 2007, 31, 3396–3426, Quant. Finance, 2008, 8, 59–79] with actual data.
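A sketch of the kinds of characteristics being compared (sample moments and autocorrelation patterns); the formal econometric tests the paper develops are not reproduced, and both series below are synthetic stand-ins.

```python
# Comparing simulated and actual return series on simple characteristics:
# mean, standard deviation, excess kurtosis, and autocorrelations.
import numpy as np
from scipy import stats

def characteristics(r, max_lag=5):
    acf = [np.corrcoef(r[:-k], r[k:])[0, 1] for k in range(1, max_lag + 1)]
    return {"mean": r.mean(), "std": r.std(ddof=1),
            "kurtosis": stats.kurtosis(r), "acf": np.round(acf, 3)}

rng = np.random.default_rng(9)
simulated = rng.standard_t(df=4, size=2000) * 0.01   # stand-in for MS output
actual = rng.normal(scale=0.01, size=2000)           # stand-in for market data
print(characteristics(simulated))
print(characteristics(actual))
```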

18.
Abstract

Credibility is a form of insurance pricing that is widely used, particularly in North America. The theory of credibility has been called a “cornerstone” in the field of actuarial science. Students of the North American actuarial bodies also study loss distributions, that is, the process of statistically relating a set of data to a theoretical (loss) distribution. In this work, we develop a direct link between credibility and loss distributions through the notion of a copula, a tool for understanding relationships among multivariate outcomes.

This paper develops credibility using a longitudinal data framework. In a longitudinal data framework, one might encounter data from a cross section of risk classes (towns) with a history of insurance claims available for each risk class. For the marginal claims distributions, we use generalized linear models, an extension of linear regression that also encompasses Weibull and gamma regressions. Copulas are used to model the dependencies over time; specifically, this paper is the first to propose using a t-copula in the context of generalized linear models. The t-copula is the copula associated with the multivariate t-distribution; like the univariate t-distributions, it seems especially suitable for empirical work. Moreover, we show that the t-copula gives rise to easily computable predictive distributions that we use to generate credibility predictors. Like Bayesian methods, our copula credibility prediction methods allow us to provide an entire distribution of predicted claims, not just a point prediction.

We present an illustrative example of Massachusetts automobile claims, and compare our new credibility estimates with those currently existing in the literature.
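A sketch of the key simulation step, drawing dependent uniforms from a t-copula and pushing them through illustrative gamma margins (the paper pairs the copula with GLM margins fitted to data):

```python
# Simulating from a t-copula with given margins. The correlation, degrees of
# freedom, and gamma margins below are illustrative assumptions.
import numpy as np
from scipy import stats

def t_copula_sample(corr, df, n, rng):
    """Draw uniforms with t-copula dependence: multivariate t -> marginal t CDF."""
    chol = np.linalg.cholesky(corr)
    z = rng.standard_normal((n, corr.shape[0])) @ chol.T
    w = df / rng.chisquare(df, size=(n, 1))
    return stats.t.cdf(z * np.sqrt(w), df)       # componentwise t CDF

rng = np.random.default_rng(10)
corr = np.array([[1.0, 0.6], [0.6, 1.0]])        # dependence across two years
u = t_copula_sample(corr, df=5, n=10000, rng=rng)
claims = stats.gamma(a=2.0, scale=500.0).ppf(u)  # gamma "GLM-style" margins
print(np.corrcoef(claims.T))
```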

19.
Abstract

This paper develops a Pareto scale-inflated outlier model. This model is intended for use when data from some standard Pareto distribution of interest is suspected to have been contaminated with a relatively small number of outliers from a Pareto distribution with the same shape parameter but with an inflated scale parameter. The Bayesian analysis of this Pareto scale-inflated outlier model is considered and its implementation using the Gibbs sampler is discussed. The paper contains three worked illustrative examples, two of which feature actual insurance claims data.
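A Gibbs-sampler sketch of a scale-inflated outlier model of this type, simplified by treating the base scale and the inflation factor as known, so that only the shape, the outlier probability, and the indicators are sampled; the priors and planted data are illustrative.

```python
# Gibbs sketch: Pareto data with scale-inflated outliers (base scale sigma and
# inflation factor k assumed known for simplicity).
import numpy as np

rng = np.random.default_rng(11)
sigma, k = 1.0, 5.0
x = np.concatenate([sigma * (1 + rng.pareto(2.0, 95)),
                    k * sigma * (1 + rng.pareto(2.0, 5))])  # 5 planted outliers

delta = (x > 10 * sigma).astype(int)        # initial outlier indicators
alpha, p = 1.0, 0.05
for it in range(5000):
    scales = np.where(delta == 1, k * sigma, sigma)
    # shape | rest: a Gamma(1, 1) prior is conjugate to the Pareto likelihood
    alpha = rng.gamma(shape=1 + len(x), scale=1 / (1 + np.log(x / scales).sum()))
    # indicators | rest: an observation below k*sigma cannot be an outlier
    f0 = np.where(x > sigma, alpha * sigma**alpha / x**(alpha + 1), 0.0)
    f1 = np.where(x > k * sigma, alpha * (k * sigma)**alpha / x**(alpha + 1), 0.0)
    w1 = p * f1 / (p * f1 + (1 - p) * f0)
    delta = (rng.random(len(x)) < w1).astype(int)
    p = rng.beta(1 + delta.sum(), 1 + len(x) - delta.sum())  # Beta(1, 1) prior
print("final draw:", alpha, p, delta.sum())
```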

20.
Abstract

In multiple decrement theory, disability theory, nuptiality and fertility theory, and in other connections, actuaries and demographers often study what are in effect simple Markov chain models. In these models, the transition probabilities are defined on the basis of the forces of transition (infinitesimal transition probabilities). For various reasons, new models are sometimes constructed by replacing some of the original forces of transition by 0. We call the transition probabilities of such a new model partial probabilities.
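A numerical illustration of partial probabilities, assuming a three-state illness-death model with constant forces: the transition matrix is obtained from the Kolmogorov forward equations, first for the full model and then with the recovery force replaced by 0.

```python
# Transition probabilities from forces of transition, and "partial
# probabilities" obtained by zeroing one force. Forces are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def transition_matrix(forces, t):
    """Solve the Kolmogorov forward equation P'(t) = P(t) Q for constant forces."""
    def rhs(_, p):
        return (p.reshape(3, 3) @ forces).ravel()
    sol = solve_ivp(rhs, (0.0, t), np.eye(3).ravel(), rtol=1e-8)
    return sol.y[:, -1].reshape(3, 3)

# states: 0 = active, 1 = disabled, 2 = dead
Q = np.array([[-0.06, 0.04, 0.02],
              [0.03, -0.08, 0.05],
              [0.00, 0.00, 0.00]])
Q_partial = Q.copy()
Q_partial[1, 0] = 0.0                                    # zero the recovery force
Q_partial[1, 1] = -(Q_partial[1, 0] + Q_partial[1, 2])   # rows must sum to zero
print(transition_matrix(Q, 10.0)[0])          # full-model probabilities from active
print(transition_matrix(Q_partial, 10.0)[0])  # partial probabilities
```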
