Similar Documents
20 similar documents found.
1.
This paper is inspired by two papers by Riegel, who proposed to consider the paid and incurred loss development of individual claims, to use a filter to separate small and large claims, and to construct loss development squares for the paid or incurred small or large claims and for the numbers of large claims. We show that such loss development squares can be constructed from collective models for the accident years. Moreover, under certain assumptions on these collective models, we show that a development pattern exists for each of these loss development squares, which implies that various methods of loss reserving can be used for prediction and that the chain ladder method is a natural method for the prediction of future numbers of large claims.
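To illustrate the chain ladder method referred to in this abstract, here is a minimal sketch with made-up cumulative figures (not data from the paper): development factors are estimated column by column and then chained forward to fill in the unobserved lower triangle.

```python
def chain_ladder(triangle):
    """triangle[i][j]: cumulative amount for accident year i at
    development year j; rows shorten as accident years get more recent."""
    n = len(triangle)
    factors = []
    for j in range(n - 1):
        # f_j = (sum of column j+1) / (sum of column j), restricted to
        # accident years where both entries are observed
        num = sum(row[j + 1] for row in triangle if len(row) > j + 1)
        den = sum(row[j] for row in triangle if len(row) > j + 1)
        factors.append(num / den)
    completed = []
    for row in triangle:
        full = list(row)
        for j in range(len(row) - 1, n - 1):
            full.append(full[-1] * factors[j])  # chain factors forward
        completed.append(full)
    return factors, completed

triangle = [
    [100.0, 150.0, 165.0, 170.0],
    [110.0, 168.0, 185.0],
    [ 95.0, 140.0],
    [125.0],
]
factors, completed = chain_ladder(triangle)
```

The same recursion applies whether the cells hold paid amounts, incurred amounts, or large-claim counts, which is why the method carries over to the number triangles discussed above.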

2.
We consider a simple Poisson cluster model for the payment numbers and the corresponding total payments for insurance claims arriving in a given year. Due to the Poisson structure, one can give reasonably explicit expressions for the prediction of the payment numbers and total payments in future periods given the past observations of the payment numbers. One can also derive reasonably explicit expressions for the corresponding prediction errors. When the claim number distribution lies in Panjer's (a, b) class, these expressions can be evaluated by simple recursive algorithms. We study the conditions under which the predictions are asymptotically linear as the number of past payments becomes large. We also demonstrate that, in other regimes, the prediction may be far from linear; for example, a staircase-like pattern may arise. We illustrate how the theory works on real-life data, also in comparison with the chain ladder method.
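A hedged sketch of the Panjer recursion the abstract alludes to, for a compound total S = X_1 + ... + X_N when the claim number N lies in the (a, b) class, i.e. P(N = k) = (a + b/k) P(N = k - 1). The severity here is a discrete density on {1, 2, ...} with f(0) = 0, so the recursion simplifies to g(s) = sum_{j=1}^{s} (a + b*j/s) f(j) g(s - j) with g(0) = P(N = 0). The example parameters are illustrative, not from the paper.

```python
import math

def panjer(a, b, p0, f, s_max):
    """Density g[0..s_max] of the compound total, f given as a dict
    mapping severity values in {1, 2, ...} to probabilities."""
    g = [p0] + [0.0] * s_max
    for s in range(1, s_max + 1):
        g[s] = sum((a + b * j / s) * f.get(j, 0.0) * g[s - j]
                   for j in range(1, s + 1))
    return g

# Example: Poisson(lam) claim numbers (a = 0, b = lam, p0 = exp(-lam)),
# severities 1 or 2 with probability 1/2 each.
lam = 2.0
g = panjer(0.0, lam, math.exp(-lam), {1: 0.5, 2: 0.5}, 40)
```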

3.
We connect the classical chain ladder to granular reserving. This is done by defining explicitly how the classical run-off triangles are generated from individual iid observations in continuous time. One important result is that the development factors are in one-to-one correspondence with a histogram estimator of a hazard running in reversed development time. A second result is that chain ladder has a systematic bias if the row effect does not have the same distribution when conditioned on each of the aggregated periods. This means that the chain ladder assumptions on one level of aggregation, say yearly, differ from the chain ladder assumptions under quarterly aggregation, and the optimal level of aggregation is a classical bias-variance trade-off depending on the data set. We introduce smooth development factors arising from a non-parametric hazard kernel smoother, which improves the estimation significantly.

4.
The vast literature on stochastic loss reserving concentrates on data aggregated in run-off triangles. However, a triangle is a summary of an underlying data set with the development of individual claims. We refer to this data set as ‘micro-level’ data. Using the framework of Position Dependent Marked Poisson Processes and statistical tools for recurrent events, a data set with liability claims from a European insurance company is analyzed. We use detailed information on the time of occurrence of the claim, the delay between occurrence and reporting to the insurance company, the occurrences of payments and their sizes, and the final settlement. Our specifications are (semi)parametric and our approach is likelihood based. We calibrate our model to historical data and use it to project the future development of open claims. An out-of-sample prediction exercise shows that we obtain detailed and valuable reserve calculations. For the case study developed in this paper, the micro-level model outperforms the results obtained with traditional loss reserving methods for aggregate data.
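The occurrence-and-reporting-delay structure described in this abstract can be sketched with a toy micro-level simulation (assumed rates, not the paper's fitted model): claims occur as a Poisson process on [0, tau], each claim is reported after an exponential delay, and claims that have occurred but are not yet reported at the evaluation date tau form the IBNR count.

```python
import random

def simulate_ibnr(rate, mean_delay, tau, rng):
    """One realization: Poisson occurrences via exponential
    inter-arrival times, exponential reporting delays."""
    t, reported, ibnr = 0.0, 0, 0
    while True:
        t += rng.expovariate(rate)
        if t > tau:
            break
        if t + rng.expovariate(1.0 / mean_delay) <= tau:
            reported += 1
        else:
            ibnr += 1
    return reported, ibnr

rng = random.Random(7)
runs = [simulate_ibnr(rate=50.0, mean_delay=0.2, tau=1.0, rng=rng)
        for _ in range(500)]
avg_ibnr = sum(i for _, i in runs) / len(runs)
# theoretical mean: rate * mean_delay * (1 - exp(-tau / mean_delay)) ≈ 9.93
```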

5.
In this paper we present two different approaches to including diagonal effects in non-life claims reserving based on run-off triangles. Empirical analyses suggest that the approaches in Zehnwirth (2003) and Kuang et al. (2008a, 2008b) do not work well with low-dimensional run-off triangles because the estimation uncertainty is too large. To overcome this problem we consider similar models with a smaller number of parameters. These are closely related to the framework considered in Verbeek (1972) and Taylor (1977, 2000): the separation method. We explain that these models can be interpreted as extensions of the multiplicative Poisson models introduced by Hachemeister & Stanard (1975) and Mack (1991).

6.
Buchwalder et al. (2006) have illustrated that there are different approaches for the derivation of an estimate for the parameter estimation error in the distribution-free chain ladder reserving method. In this paper, we demonstrate that these approaches provide estimates that are close to each other for typical parameters. This is carried out by proving upper and lower bounds.

7.
Abstract

Traditional claims-reserving techniques are based on so-called run-off triangles containing aggregate claim figures. Such a triangle provides a summary of an underlying data set with individual claim figures. This contribution explores the interpretation of the available individual data in the framework of longitudinal data analysis. Making use of the theory of linear mixed models, a flexible model for loss reserving is built. Whereas traditional claims-reserving techniques do not lead directly to predictions for individual claims, the mixed model enables such predictions on a sound statistical basis with, for example, confidence regions. Both a likelihood-based and a Bayesian approach are considered. In the frequentist approach, expressions for the mean squared error of prediction of an individual claim reserve, origin year reserves, and the total reserve are derived. Using MCMC techniques, the Bayesian approach allows simulation from the complete predictive distribution of the reserves and the calculation of various risk measures. The paper ends with an illustration of the suggested techniques on a data set from practice, consisting of Belgian automotive third-party liability claims. The results of the mixed-model analysis are compared with those obtained from traditional claims-reserving techniques for run-off triangles. For the data under consideration, the lognormal mixed model fits the observed individual data well. It leads to individual predictions comparable to those obtained by applying chain-ladder development factors to individual data. Concerning the predictive power on the aggregate level, the mixed model leads to reasonable predictions and performs comparably to, and often better than, the stochastic chain ladder for aggregate data.

8.
9.
This article proposes a new loss reserving approach, inspired by the collective model of risk theory. According to the collective paradigm, we do not relate payments to specific claims or policies, but we work within a frequency-severity setting, with a number of payments in every cell of the run-off triangle, together with the corresponding paid amounts. Compared to the Tweedie reserving model, which can be seen as a compound sum with a Poisson-distributed number of terms and Gamma-distributed summands, we allow here for more general severity distributions, typically mixture models combining a light-tailed component with a heavier-tailed one, including inflation effects. The severity model is fitted to individual observations and not to aggregated data displayed in run-off triangles with a single value in every cell. In that respect, the modeling approach appears to be a powerful alternative both to the crude traditional aggregated approach based on triangles and to the extremely detailed individual reserving approach developing each and every claim separately. A case study based on a motor third-party liability insurance portfolio observed over 2004–2014 is used to illustrate the relevance of the proposed approach.
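The frequency-severity setting with a mixture severity can be sketched as follows. This is an illustrative simulation with assumed parameters (not the paper's fitted model): each run-off cell holds a Poisson number of payments, each drawn from a two-component mixture combining a light-tailed Gamma body with a heavier Pareto tail.

```python
import math
import random

def draw_poisson(lam, rng):
    """Knuth's multiplication method; fine for small lam."""
    limit, prod, k = math.exp(-lam), rng.random(), 0
    while prod > limit:
        prod *= rng.random()
        k += 1
    return k

def cell_total(lam, p_light, rng):
    """Number of payments and total paid amount for one cell."""
    n = draw_poisson(lam, rng)
    total = 0.0
    for _ in range(n):
        if rng.random() < p_light:
            total += rng.gammavariate(2.0, 500.0)     # light-tailed body
        else:
            total += 5000.0 * rng.paretovariate(2.5)  # heavier tail
    return n, total

rng = random.Random(42)
samples = [cell_total(3.0, 0.9, rng) for _ in range(2000)]
mean_count = sum(n for n, _ in samples) / len(samples)
```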

10.
Applications of state space models and the Kalman filter are comparatively underrepresented in stochastic claims reserving. This is usually caused by their high complexity due to matrix-based approaches, which complicates their application. To facilitate the implementation of state space models in practice, we present a state space model for cumulative payments in the framework of a scalar-based approach. In addition to a comprehensive presentation of this scalar state space model, some empirical applications and comparisons with popular stochastic claims reserving methods are performed, which show the strengths of the scalar state space model in practical applications. This model is a robustified extension of the well-known chain ladder method under the assumption that the observations in the upper triangle are based on unobservable states. Using Kalman-filter recursions for prediction, filtering and smoothing of cumulative payments, the entire unobservable lower and upper run-off triangles can be determined. Moreover, the model provides an easy way to find and smooth outliers and to interpolate gaps in the data. Thus, the problem of missing values in the upper triangle is also solved in a natural way.
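A hedged, scalar illustration of the Kalman recursions (illustrative numbers, not the paper's model), using a local-level model: state x_t = x_{t-1} + w_t with w_t ~ N(0, q), observation y_t = x_t + v_t with v_t ~ N(0, r). Gaps in the data (None) are handled by skipping the update step, mirroring how the state space approach interpolates missing cells, and outlying observations are shrunk toward the filtered level.

```python
def kalman_filter(ys, q, r, x0=0.0, p0=1e6):
    """Scalar Kalman filter; q = state noise variance,
    r = observation noise variance, diffuse prior by default."""
    x, p = x0, p0
    filtered = []
    for y in ys:
        p = p + q                    # predict: state variance grows
        if y is not None:
            k = p / (p + r)          # Kalman gain
            x = x + k * (y - x)      # update with the observation
            p = (1.0 - k) * p
        filtered.append(x)
    return filtered

ys = [10.0, 12.0, None, 13.0, 50.0, 14.0]  # None = gap, 50.0 = outlier
est = kalman_filter(ys, q=0.1, r=4.0)
```

With the diffuse prior the first estimate essentially equals the first observation; the gain then shrinks, so the outlying value 50.0 is pulled toward the running level rather than reproduced.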

11.
We propose a Bayesian model to quantify the uncertainty associated with the payments per claim incurred (PPCI) algorithm. Based on the PPCI algorithm, two submodels are proposed for the number of reported claims run-off triangle and the PPCI run-off triangle, respectively. The model for the claims amount is then derived from the two submodels under the assumption of independence between the number of incurred claims and the PPCI. The joint likelihood of the number of reported claims and claims amount is derived. The posterior distribution of parameters is estimated via the Hamiltonian Monte Carlo (HMC) sampling approach. The Bayesian estimator, the process variance, the estimation variance, and the predictive distribution of unpaid claims are also studied. The proposed model and the HMC inference engine are applied to an empirical claims dataset from WorkSafe Victoria to estimate the unpaid claims of the doctor benefit. The Bayesian modeling procedure is further refined by including a preliminary generalized linear model analysis. The results are compared with those in a PwC report. An alternative model is compared with the proposed model based on various information criteria.

12.
Abstract

In a non-life insurance business an insurer often needs to build up a reserve to be able to meet future obligations arising from claims that are incurred but not yet completely reported. To forecast these claims reserves, a simple but generally accepted algorithm is the classical chain-ladder method. Recent research essentially focused on the underlying model for the claims reserves to arrive at appropriate bounds for the estimates of future claims reserves. Our research concentrates on scenarios with outlying data. On closer examination it is demonstrated that the forecasts for future claims reserves are very dependent on outlying observations. The paper focuses on two approaches to robustify the chain-ladder method: the first method detects and adjusts the outlying values, whereas the second method is based on a robust generalized linear model technique. In this way insurers will be able to find a reserve that is similar to the reserve they would have found if the data contained no outliers. Because the robust method flags the outliers, it is possible to single these observations out for further examination. To obtain the corresponding standard errors, the bootstrapping technique is applied. The robust chain-ladder method is applied to several run-off triangles with and without outliers, showing its excellent performance.

13.
Tax loss carryforwards (TLC) are a valuable asset because they can potentially reduce a company’s future tax payments. However, there is often a great deal of uncertainty regarding the probability and timing of these tax savings. We propose a contingent-claim model to value this asset. The value is determined primarily by the size of accumulated carryforwards relative to earnings. We show that, for poorly performing firms with large TLC, (1) the realizable (or fair) value of the tax losses can be significantly smaller than the book value, and (2) the tax losses can account for a significant fraction of the company’s equity value. The model is illustrated by calibrating it to a couple of companies with large carryforwards. Finally, we show how the model can be used to compute the marginal tax rate of a company with carryforwards.

14.
This paper suggests formulas able to capture a potentially strong connection among credit losses in downturns without assuming any specific distribution for the variables involved. We first show that the current model adopted by regulators (Basel) is equivalent to a conditional distribution derived from the Gaussian copula (which does not exhibit tail dependence). We then use conditional distributions derived from copulas that express tail dependence (stronger dependence across higher losses) to estimate the probability of credit losses in extreme scenarios (crises). Next, we use data on historical credit losses incurred by American banks to compare the suggested approach to the Basel formula with respect to their performance in predicting the extreme losses observed in 2009 and 2010. Our results indicate that, in general, the copula approach outperforms the Basel method in two of the three credit segments investigated. The proposed method is extendable to other differentiable copula families, which gives flexibility to future practical applications of the model.
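The Basel conditional distribution the abstract refers to is the Vasicek one-factor formula, which coincides with the conditional distribution of a Gaussian copula: PD_cond(alpha) = Phi((Phi^-1(PD) + sqrt(rho) * Phi^-1(alpha)) / sqrt(1 - rho)). A sketch with illustrative parameter values follows.

```python
import math
from statistics import NormalDist

N = NormalDist()  # standard normal

def vasicek_conditional_pd(pd, rho, alpha):
    """Default probability conditional on the alpha-quantile of the
    single systematic Gaussian factor (core of the Basel IRB formula)."""
    return N.cdf((N.inv_cdf(pd) + math.sqrt(rho) * N.inv_cdf(alpha))
                 / math.sqrt(1.0 - rho))

# a 99.9% systematic stress, the confidence level used in Basel IRB
stressed = vasicek_conditional_pd(pd=0.02, rho=0.15, alpha=0.999)
```

Because the Gaussian copula has zero tail dependence, the stressed probability grows only moderately; copulas with tail dependence would assign more mass to joint extreme losses at the same marginal PD.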

15.
In this paper, we introduce the use of interacting particle systems in the computation of probabilities of simultaneous defaults in large credit portfolios. The method can be applied to compute small historical as well as risk-neutral probabilities. It only requires that the model be based on a background Markov chain for which a simulation algorithm is available. We use the strategy developed by Del Moral and Garnier in (Ann. Appl. Probab. 15:2496–2534, 2005) for the estimation of random walk rare events probabilities. For the purpose of illustration, we consider a discrete-time version of a first passage model for default. We use a structural model with stochastic volatility, and we demonstrate the efficiency of our method in situations where importance sampling is not possible or numerically unstable.

16.
The contour maps of the error of historical and parametric estimates of the global minimum risk for large random portfolios optimized under the Expected Shortfall (ES) risk measure are constructed. Similar maps for the VaR of the ES-optimized portfolio are also presented, along with results for the distribution of portfolio weights over the random samples and for the out-of-sample and in-sample estimates for ES. The contour maps allow one to quantitatively determine the sample size (the length of the time series) required by the optimization for a given number of different assets in the portfolio, at a given confidence level and a given level of relative estimation error. The necessary sample sizes invariably turn out to be unrealistically large for any reasonable choice of the number of assets and the confidence level. These results are obtained via analytical calculations based on methods borrowed from the statistical physics of random systems, supported by numerical simulations.

17.
In this study we compare the predictive ability of loan loss provisions with respect to actual losses under IFRS and local GAAP. The ‘incurred loss model’ of IAS 39 is a model that requires a relatively low level of judgment by preparers compared to alternative models that exist under local GAAP. We find that loan loss provisions in IFRS bank years predict future credit losses to a lesser extent than in local GAAP bank years, consistent with the incurred loss model reducing the timeliness of provisions. We also examine the interaction of standards with enforcement of financial reporting and with preparer incentives. In testing the role of enforcement from, e.g., banking supervisory authorities, we find that the benefits of local GAAP are largely limited to high-enforcement settings. Local GAAP also performs relatively better than IFRS in large and in profitable banks. This has implications for the IASB and the FASB as they prescribe the adoption of the more judgment-based expected loss model in IFRS 9 and the corresponding US GAAP standard (ASC topic 326), as well as for supervisory authorities that will enforce these standards.

18.
Catastrophe bonds, also known as CAT bonds, are insurance-linked securities that help transfer catastrophe risks from the insurance industry to bondholders. When the aggregate catastrophe loss exceeds a specified amount by maturity, the CAT bond is triggered and the future bond payments are reduced. This article first presents a general pricing formula for a CAT bond with coupon payments, which can be adapted to various assumptions for the catastrophe loss process. Next, it gives formulas for the optimal percentage write-down coefficients, implemented by Monte Carlo simulations, which maximize two measures of risk reduction, the hedge effectiveness rate (HER) and hedge effectiveness (HE), respectively, and examines how the optimal percentage write-down coefficients help reinsurance or insurance companies mitigate extreme catastrophe losses. Last, it demonstrates with numerical examples how the number of coupon payments, loss share, retention level, strike price, maturity, the frequency and severity parameters of the catastrophe loss process, and different interest rate models affect the optimal percentage write-down coefficients.
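The trigger-and-write-down mechanism can be sketched with a Monte Carlo valuation under assumed dynamics (not the paper's general formula): aggregate losses follow a compound Poisson process with lognormal severities, and if the aggregate loss by maturity T exceeds the trigger, the principal is written down by the fraction w.

```python
import math
import random

def cat_bond_price(face, w, trigger, lam, mu, sigma, T, r, n_paths, seed=1):
    """Discounted expected principal of a zero-coupon CAT bond."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        # number of catastrophes over [0, T]: inverse-CDF Poisson draw
        u, k, p = rng.random(), 0, math.exp(-lam * T)
        cum = p
        while u > cum:
            k += 1
            p *= lam * T / k
            cum += p
        loss = sum(rng.lognormvariate(mu, sigma) for _ in range(k))
        total += face * (1.0 - w) if loss > trigger else face
    return math.exp(-r * T) * total / n_paths

price = cat_bond_price(face=100.0, w=0.5, trigger=500.0, lam=0.5,
                       mu=4.0, sigma=1.0, T=3.0, r=0.03, n_paths=5000)
```

Raising the write-down fraction w lowers the price but increases the issuer's hedge against extreme losses, which is the trade-off behind the optimal coefficients studied above.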

19.
We investigate how provisioning models interact with bank regulation to affect banks' risk-taking behavior. We study an accuracy versus timeliness trade-off between an incurred loss model (IL) and an expected loss model (EL) such as current expected credit loss model or International Financial Reporting Standards 9. Relative to IL, even though EL improves efficiency by prompting earlier corrective action in bad times, it induces banks to originate either safer or riskier loans. Trading off ex post benefits versus ex ante real effects, we show that more timely information under EL enhances efficiency either when banks are insufficiently capitalized or when regulatory intervention is likely to be effective. Conversely, when banks are moderately capitalized and regulatory intervention is sufficiently costly, switching to EL impairs efficiency. From a policy perspective, our analysis highlights the roles that regulatory capital and the effectiveness of regulatory intervention play in determining the economic consequences of provisioning models. EL spurs credit supply and improves financial stability in economies where intervening in banks' operations is relatively frictionless and/or regulators can tailor regulatory capital to incorporate information about credit losses.

20.
In this paper we introduce a new approach to the calculation of claims reserves (known and IBNR cases) which is particularly adapted to the business model of legal expense insurance. An essential aspect here is the split into two model components: case numbers and average claim costs. In contrast to other reserving methods for case numbers and claims cash flows, which are frequently used in practice without checking their validity for the application at hand, we introduce a model in which the time until case settlement is described by a lifetime distribution, following the principles of life insurance. The split of model components also allows for a simple implementation of cost inflation effects, which is required by German law. Finally, the approach proposed here can readily be transferred to the calculation of IBNR reserves.
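The two-component split can be sketched as follows, under the simplifying assumption of an exponential settlement-time distribution (one of the simplest lifetime distributions; the parameters are illustrative, not from the paper): the expected number of still-open cases from a cohort of n claims reported at time 0, evaluated t years later, is n * S(t) with S the survival function, and the reserve combines open case numbers with inflated average claim costs.

```python
import math

def expected_open_cases(n, t, mean_settlement_time):
    """Expected open cases after t years; S(t) = exp(-t / mean) for an
    exponential settlement-time distribution."""
    survival = math.exp(-t / mean_settlement_time)
    return n * survival

def case_reserve(n, t, mean_settlement_time, avg_cost, inflation):
    # average cost inflated to the evaluation date, since cost
    # inflation must be reflected in the reserve
    return (expected_open_cases(n, t, mean_settlement_time)
            * avg_cost * (1.0 + inflation) ** t)

open_cases = expected_open_cases(n=1000, t=2.0, mean_settlement_time=3.0)
reserve = case_reserve(1000, 2.0, 3.0, avg_cost=800.0, inflation=0.02)
```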


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号