Similar literature
20 similar results found (search time: 31 ms)
1.
Recently, Cooray & Ananda (2005) proposed a composite lognormal-Pareto model for use with loss payments data of the sort arising in the actuarial and insurance industries. Their model is based on a lognormal density up to an unknown threshold value and a two-parameter Pareto density thereafter. Here we identify and discuss limitations of this composite lognormal-Pareto model which are likely to severely curtail its potential for practical application to real-world data sets. In addition, we present two different composite models based on lognormal and Pareto models in order to address these concerns. The performance of all three composite models is discussed and compared in the context of an example based upon a well-known fire insurance data set.
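For intuition, the general shape of such a composite density can be sketched as below. This is a hedged illustration, not the authors' exact parameterisation: Cooray & Ananda tie the head and tail together through continuity and differentiability conditions at the threshold, whereas here the mixing weight `r` is left free.

```python
import math

def lognormal_pdf(x, mu, sigma):
    z = (math.log(x) - mu) / sigma
    return math.exp(-0.5 * z * z) / (x * sigma * math.sqrt(2 * math.pi))

def lognormal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2))))

def pareto_pdf(x, alpha, theta):
    # Two-parameter Pareto with scale theta, supported on x > theta.
    return alpha * theta ** alpha / x ** (alpha + 1)

def composite_pdf(x, mu, sigma, theta, alpha, r):
    # Lognormal head (renormalised to integrate to 1 on (0, theta]) with
    # weight r, Pareto tail with weight 1 - r.  The published model instead
    # fixes the weight and parameters via smoothness conditions at theta.
    if x <= theta:
        return r * lognormal_pdf(x, mu, sigma) / lognormal_cdf(theta, mu, sigma)
    return (1.0 - r) * pareto_pdf(x, alpha, theta)
```

By construction the head contributes probability mass r and the tail 1 - r, so the density integrates to one for any admissible parameter choice.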

2.
In this paper, Bayesian methods with both Jeffreys and conjugate priors for estimating parameters of the lognormal–Pareto composite (LPC) distribution are considered. With the Jeffreys prior, the posterior distributions for parameters of interest are derived and their properties are described. Conjugate priors are proposed and the conditional posterior distributions are provided. In addition, simulation studies are performed to obtain the upper percentage points of the Kolmogorov–Smirnov and Anderson–Darling test statistics. Furthermore, these statistics are used to compare Bayesian and likelihood estimators. In order to clarify and validate the Bayesian and likelihood estimators of the LPC distribution, the well-known Danish fire insurance data set is reanalyzed.

3.
In this paper, a new class of composite models is proposed for modeling actuarial claims data of mixed sizes. The models are developed using the Stoppa distribution and a mode-matching procedure. The use of the Stoppa distribution allows for more flexibility over the thickness of the tail, and the mode-matching procedure gives a simple derivation of composite models built from a variety of head distributions. In particular, the Weibull–Stoppa and Lognormal–Stoppa distributions are investigated. Their performance is compared with that of existing composite models in the context of the well-known Danish fire insurance data set. The results suggest the composite Weibull–Stoppa model outperforms the existing composite models in all seven goodness-of-fit measures considered.

4.
Composite models have a long history in actuarial science because they provide a flexible method of curve-fitting for heavy-tailed insurance losses. Ongoing research in this area continuously suggests methodological improvements for existing composite models and considers new ones. A number of different composite models have previously been proposed in the literature to fit the popular data set of Danish fire losses. This paper provides the most comprehensive analysis of composite loss models on the Danish fire losses data set to date, evaluating 256 composite models derived from 16 parametric distributions that are commonly used in actuarial science. If not suitably addressed, the computational challenges inevitably encountered when estimating these composite models may lead to sub-optimal solutions. General implementation strategies are developed for parameter estimation in order to arrive at an automatic way of reaching a viable solution, regardless of the specific head and/or tail distributions specified. The results identify new well-fitting composite models and provide valuable insights into the selection of composite models for which tail-evaluation measures can be useful in making risk management decisions.

5.
In this paper we develop several composite Weibull-Pareto models and suggest their use for modeling loss payments and other forms of actuarial data. These models all comprise a Weibull distribution up to a threshold point, and some form of Pareto distribution thereafter. They are similar in spirit to some composite lognormal-Pareto models that have previously been considered in the literature. All of these models are applied, and their performance compared, in the context of a real-world fire insurance data set.

6.
In this paper I first define the regime-switching lognormal model. Monthly data from the Standard & Poor's 500 and the Toronto Stock Exchange 300 indices are used to fit the model parameters by maximum likelihood estimation. The fit of the regime-switching model to the data is compared with that of other common econometric models, including the generalized autoregressive conditionally heteroskedastic (GARCH) model. The distribution function of the regime-switching model is derived. Prices of European options under the regime-switching model are derived and implied volatilities are explored. Finally, an example of the application of the model to maturity guarantees under equity-linked insurance is presented. Equations for quantile and conditional tail expectation (Tail-VaR) risk measures are derived, and a numerical example compares the regime-switching lognormal model's results with those of the more traditional lognormal stock return model.
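The generating mechanism of a two-regime switching lognormal model is simple to state: while in regime k, the monthly log-return is drawn from a normal distribution with regime-specific mean and volatility, and the regime itself evolves as a two-state Markov chain. A minimal simulation sketch (parameter names are illustrative, not the paper's fitted values):

```python
import random

def simulate_rsln(n, mu, sigma, p01, p10, seed=None):
    # mu, sigma: per-regime mean and std dev of the monthly log-return.
    # p01, p10: Markov-chain probabilities of switching regime 0 -> 1
    # and 1 -> 0 between consecutive months.
    rng = random.Random(seed)
    regime = 0
    returns = []
    for _ in range(n):
        returns.append(rng.gauss(mu[regime], sigma[regime]))
        if rng.random() < (p01 if regime == 0 else p10):
            regime = 1 - regime
    return returns
```

Maximum likelihood estimation of (mu, sigma, p01, p10) then proceeds by summing the likelihood over the unobserved regime path, since only the returns, not the regimes, are observed.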

7.
Using data from 40 Danish municipalities on building characteristics, loss prevention technologies, insurance claims, and insurance bids from 2008 to 2019, we investigate whether and how loss prevention technologies influenced contract prices. The insurance bids cover a variety of municipal buildings across a range of risks, including fire, water leakage, and building security, as well as structure detection and alarm systems. The study shows that the magnitude of historical claims may affect both pricing offers and interest in loss prevention technologies. We do not find that loss prevention technologies have any significant influence on contract price. This suggests market inefficiencies arising from imperfect information.

8.
Pet insurance in North America continues to be a growing industry. Unlike in Europe, where some countries have as much as 50% of the pet population insured, very few pets in North America are insured. Pricing practices in the past have relied on market share objectives more than on actual experience. Pricing continues to be performed on this basis, with little consideration for actuarial principles and techniques. Developing mortality and morbidity models for use in pricing and new product development is essential for pet insurance. This paper examines insurance claims as experienced in the Canadian market. The time-to-event data are investigated using the Cox proportional hazards model. The claim number follows a nonhomogeneous Poisson process with covariates, and the claim size is assumed to follow a lognormal distribution. These two models work well for aggregate claims with covariates. The first three central moments of the aggregate claims for one insured animal, as well as for a block of insured animals, are derived. We illustrate the models using data collected over an eight-year period.
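In the simplest special case of this setup (one insured, no covariates, a homogeneous Poisson claim count), the first three central moments of the aggregate claims are available in closed form, because the j-th cumulant of a compound Poisson sum is lambda times the j-th raw moment of the severity, and lognormal raw moments are explicit. A hedged sketch of that calculation, not the paper's full covariate model:

```python
import math

def aggregate_claim_moments(lam, mu, sigma):
    # S = X_1 + ... + X_N with N ~ Poisson(lam) and X ~ Lognormal(mu, sigma).
    # For a compound Poisson sum, the j-th cumulant of S is lam * E[X^j],
    # where E[X^j] = exp(j*mu + j**2 * sigma**2 / 2).
    raw = [math.exp(j * mu + 0.5 * (j * sigma) ** 2) for j in (1, 2, 3)]
    mean = lam * raw[0]           # first central moment (mean)
    variance = lam * raw[1]       # second central moment
    third_central = lam * raw[2]  # third central moment
    return mean, variance, third_central
```

For a block of independent insured animals the moments simply add, which is why the same formulas cover both the individual and portfolio cases.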

9.
Option pricing models based on an underlying lognormal distribution typically exhibit volatility smiles or smirks, where the implied volatility varies by strike price. To adequately model the underlying distribution, a less restrictive model is needed. A relaxed binomial model is developed here that can account for the skewness of the underlying distribution, and a relaxed trinomial model is developed that can account for both its skewness and kurtosis. The new models incorporate the usual binomial and trinomial tree models as restricted special cases. Unlike previous flexible tree models, the size and probability of jumps are held constant at each node, so only minor modifications to existing code for lattice models are needed to implement the new approach. The new approach also allows the calculation of implied skewness and implied kurtosis. Numerical results show that the relaxed binomial and trinomial tree models developed in this study are at least as accurate as tree models based on lognormality when the true underlying distribution is lognormal, and substantially more accurate when it is not.
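The restricted special case that the relaxed models nest is the standard Cox–Ross–Rubinstein binomial tree, where jump sizes and probabilities are pinned down by the lognormal assumption. A minimal sketch of that baseline (illustrative only; the relaxed models of the paper free up these parameters to match skewness and kurtosis):

```python
import math

def crr_call_price(S, K, r, sigma, T, n):
    # Cox-Ross-Rubinstein binomial tree for a European call: n steps,
    # up factor u = exp(sigma*sqrt(dt)), d = 1/u, and the risk-neutral
    # up probability p chosen so the tree matches the lognormal drift.
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)
    disc = math.exp(-r * dt)
    # Terminal payoffs indexed by the number of up-moves j.
    values = [max(S * u ** j * d ** (n - j) - K, 0.0) for j in range(n + 1)]
    # Backward induction to the root.
    for step in range(n, 0, -1):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(step)]
    return values[0]
```

Because the jump size and probability are the same at every node, swapping in relaxed values requires only local changes to `u`, `d`, and `p` in code of this shape, which is the implementation convenience the abstract highlights.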

10.
This paper further considers the composite Lognormal–Pareto model proposed by Cooray & Ananda (2005) and suitably modified by Scollnik (2007). This model is based on a Lognormal density up to an unknown threshold value and a Pareto density thereafter. Instead of using a single threshold value applying uniformly to the whole data set, the model proposed in the present paper allows for heterogeneity with respect to the threshold and lets it vary among observations. Specifically, the threshold value for a particular observation is seen as the realization of a positive random variable, and the mixed composite Lognormal–Pareto model is obtained by averaging over the population of interest. The performance of the composite Lognormal–Pareto model and of its mixed extension is compared using the well-known Danish fire losses data set.

11.
This paper proposes a simple partial internal model for longevity risk within the Solvency II framework. The model is closely linked to the mechanisms associated with the so-called Danish longevity benchmark, where the underlying mortality intensity and its trend are estimated yearly based on mortality experience from the Danish life and pension insurance sector and on current data from the entire Danish population. Within this model, we derive an estimate of the 99.5th percentile for longevity risk, which differs from the 20% longevity stress of the standard model. The new stress explicitly reflects the risk associated with unexpected changes in the underlying population mortality intensity on a one-year horizon and at a 99.5% confidence level. In addition, the model contains a component that quantifies the unsystematic longevity risk associated with a given insurance portfolio; this component depends on the size of the specific portfolio.

12.
The 'hunger for bonus' is a well-known phenomenon in insurance: the insured does not report all of his accidents, in order to preserve the bonus on next year's premium. In this article, we assume that the number of accidents follows a Poisson distribution but that the number of claims is generated by censorship of this Poisson distribution. We then present new models for panel count data based on the zero-inflated Poisson distribution. From the claims distributions, we propose an approximation of the accident distribution, which can provide insight into the behavior of insureds. A numerical illustration based on the reported claims of a Spanish insurance company is included to support this discussion.
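The zero-inflated Poisson building block used here has a simple closed form: with some extra probability pi a zero count is reported regardless of the accident count, and otherwise the count is Poisson. A minimal sketch of the probability mass function:

```python
import math

def zip_pmf(k, lam, pi):
    # Zero-inflated Poisson: with probability pi the insured reports zero
    # claims regardless of the accident count; otherwise the reported
    # count is Poisson(lam).
    pois = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi + (1.0 - pi) * pois if k == 0 else (1.0 - pi) * pois
```

The extra mass at zero is what lets the claims distribution differ from the underlying accident distribution, which is the gap the article's approximation aims to bridge.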

13.
This paper analyzes an explicit return smoothing mechanism which has recently been introduced as part of a new type of pension savings contract offered by Danish life insurers. We establish the payoff function implied by the return smoothing mechanism and show that its probabilistic properties are accurately approximated by a suitably adapted lognormal distribution. The quality of the lognormal approximation is explored via a range of simulation-based numerical experiments, and we point to several other potential practical applications of the paper's theoretical results.

14.
Expected S&P 500 futures price distributions are derived using no-arbitrage option pricing models. These distributions are parameterized both as the lognormal and as a less restrictive three-parameter Burr-XII distribution. The resulting option-based probability assessments display some evidence of miscalibration very near to expiration and far from expiration, but are accurate over intermediate time ranges. The means of the implied price distributions correspond closely to the contemporaneous futures prices for both distributions, although marginally better for the Burr-XII. The Burr-XII distribution also performs better than the lognormal on calibration statistics and is hence used to recalibrate the estimated distributions.
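The three-parameter Burr-XII distribution function is available in closed form, which is part of what makes it convenient for this kind of calibration work. A sketch of the cdf (the parameterization below, with two shape parameters and a scale, is one common convention):

```python
def burr12_cdf(x, c, k, s):
    # Burr Type XII: F(x) = 1 - (1 + (x/s)^c)^(-k) for x >= 0, with shape
    # parameters c and k and scale s.  The extra shape parameter relative
    # to the lognormal is what relaxes the restriction on tail behaviour.
    return 1.0 - (1.0 + (x / s) ** c) ** (-k)
```

Inverting this cdf is equally simple, so quantiles of the implied price distribution can be read off directly during recalibration.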

15.
We investigate developments in Danish mortality based on data from 1974–1998, working in a two-dimensional model with chronological time and age as the two dimensions. The analyses are done with non-parametric kernel hazard estimation techniques; the only assumption is that the mortality surface is smooth. Cross-validation is applied for optimal bandwidth selection to ensure the proper amount of smoothing and to help distinguish between random and systematic variation in the data. A bootstrap technique is used for the construction of pointwise confidence bounds. We study mortality profiles by slicing up the two-dimensional mortality surface. Furthermore, we look at aggregated synthetic population metrics such as 'population life expectancy' and 'population survival probability'. For Danish women these metrics indicate decreasing mortality with respect to chronological time. The metrics cannot directly be used for prediction purposes; however, we suggest that life insurance companies use the estimation technique and the cross-validated bandwidth selection when analyzing their portfolio mortality. The non-parametric approach may give valuable information prior to developing more sophisticated prediction models for analyzing the economic implications of mortality changes.

16.
The paper reports empirical tests of the beta model for pricing fixed-income options. The beta model resembles the Black–Scholes model with the lognormal probability distribution replaced by a beta probability distribution. The test is based on 32,817 daily prices of Eurodollar futures options and concludes that the beta model is more accurate than alternative option pricing models.

17.
A three-parameter generalization of the Pareto distribution is presented, with a density function having a flexible upper tail for modeling loss payment data. This generalized Pareto distribution is referred to as the Odd Pareto (OP) distribution, since it is derived by considering the distributions of the odds of the Pareto and inverse Pareto distributions. Basic properties of the OP distribution are studied. Model parameters are estimated using both modified and regular maximum likelihood methods. Simulation studies are conducted to compare the OP with the exponentiated Pareto, Burr, and Kumaraswamy distributions using two different test statistics based on the maximum likelihood method. Furthermore, two examples from the Norwegian fire insurance claims data set are provided to illustrate the upper-tail flexibility of the distribution. Extensions of the Odd Pareto distribution are also considered to improve the fit to data.

18.
I formulate expected-utility-maximizing models for health insurance with a single optimal coinsurance rate (C*) and, separately, a single optimal deductible (D*). In doing so, I formalize Nyman's challenge to standard welfare-loss models, clarifying when and by how much this alters unadjusted models. Using MEPS-calibrated lognormal distributions and incorporating skewness and kurtosis measures of financial risk, I show how C* shifts as various economic parameters change. For reasonable parameter values, C* < 0.1, much lower than variance-only estimates would conclude. Omitting higher-order risk parameters substantially understates risk and hence understates optimal insurance coverage. I separately develop methods to determine D*, showing that it is approximately a fixed percentage of income that falls as the distribution of financial risks rises. This finding contrasts with existing US public policy regarding high-deductible health plans, which employ fixed deductibles independent of income.

19.
Yunnan Province is richly endowed with forest resources and is a key national ecological function zone. In 2010, Yunnan was designated a pilot province for the central government's policy-based forest fire insurance program. Policy-based forest fire insurance in Yunnan has important practical significance for exercising a social management function, promoting the transformation of Yunnan's forestry development model, advancing the construction of a "Forest Yunnan", and supporting the national "bridgehead" strategy for Yunnan. At present, Yunnan's policy-based forest fire insurance has gradually taken shape as a four-in-one framework of "coordinated pre-disaster prevention, loss compensation, promotion of post-disaster restoration of forest vegetation and production, and advancement of Yunnan's ecological function zones", which the State Forestry Administration and other agencies have dubbed the "Yunnan model". Based on field research at the Yunnan branch of Sunshine Property Insurance, the Chenggong forest fire prevention headquarters, and forestry departments, this paper analyzes the current state of Yunnan's policy-based forest fire insurance, summarizes its operating characteristics and the difficulties it faces, and offers suggestions for improving Yunnan's policy-based forest fire insurance system.

20.
The current paper provides a general approach to constructing distortion operators that can price financial and insurance risks. Our approach generalizes the Wang (2000) transform and recovers multiple distortions proposed in the literature as particular cases. This approach enables the design of distortions that are consistent with various pricing principles used in finance and insurance, such as no-arbitrage models, equilibrium models, and actuarial premium calculation principles. Such distortions allow for the incorporation of risk aversion, distribution features (e.g. skewness and kurtosis), and other considerations that are relevant to pricing contingent claims. The pricing performance of multiple distortions obtained through our approach is assessed on CAT bond data. The current paper is the first to provide evidence that jump-diffusion models are appropriate for CAT bond pricing, and that natural disaster aversion impacts empirical prices. Finally, a simpler distortion based on a distribution mixture is proposed for CAT bond pricing to facilitate implementation.
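The Wang (2000) transform that this approach generalizes has a compact closed form: a probability u is distorted by shifting its standard-normal quantile by a market price of risk and mapping back. A minimal sketch:

```python
from statistics import NormalDist

def wang_transform(u, lam):
    # Wang transform g(u) = Phi(Phi^{-1}(u) + lam): shift the standard
    # normal quantile of the probability u by the market price of risk
    # lam, then map back through the standard normal cdf Phi.
    nd = NormalDist()
    return nd.cdf(nd.inv_cdf(u) + lam)
```

Applying g to the survival function of a loss and integrating yields a risk-adjusted premium; the distortions constructed in the paper replace g with operators consistent with other pricing principles.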
