Similar Articles
1.
In the literature, one of the main objectives of stochastic claims reserving is to find models underlying the chain-ladder method in order to analyze the variability of the outstanding claims, either analytically or by bootstrapping. In bootstrapping, these models are used to find a full predictive distribution of the claims reserve, even though there is a long tradition of actuaries calculating the reserve estimate according to more complex algorithms than the chain-ladder, without explicit reference to an underlying model. In this paper we investigate existing bootstrap techniques and suggest two alternative bootstrap procedures, one non-parametric and one parametric, by which the predictive distribution of the claims reserve can be found for age-to-age development factor methods other than the chain-ladder, under rather mild model assumptions. For illustration, the procedures are applied to three different development triangles.
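For orientation, a minimal chain-ladder sketch in Python (the development-factor method that the proposed bootstrap procedures generalize); the run-off triangle and all figures are hypothetical, and the paper's bootstrap procedures themselves are not reproduced here.

```python
import numpy as np

# Hypothetical cumulative run-off triangle: rows are accident years, columns
# development years; NaN marks cells not yet observed.
C = np.array([
    [1001., 1855., 2423., 2988., 3335.],
    [1113., 2103., 2774., 3422., np.nan],
    [1265., 2433., 3233., np.nan, np.nan],
    [1490., 2873., np.nan, np.nan, np.nan],
    [1725., np.nan, np.nan, np.nan, np.nan],
])
n = C.shape[0]

# Volume-weighted age-to-age development factors over the observed rows.
f = np.array([
    np.nansum(C[: n - 1 - j, j + 1]) / np.nansum(C[: n - 1 - j, j])
    for j in range(n - 1)
])

# Complete the triangle by rolling the latest diagonal forward.
full = C.copy()
for i in range(1, n):
    for j in range(n - i, n):
        full[i, j] = full[i, j - 1] * f[j - 1]

latest = np.array([C[i, n - 1 - i] for i in range(n)])
reserve = full[:, -1] - latest
print("development factors:", np.round(f, 3))
print("total reserve:", round(float(reserve.sum()), 1))
```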

2.
Abstract

In a non-life insurance business an insurer often needs to build up a reserve to be able to meet his or her future obligations arising from incurred but not completely reported claims. To forecast these claims reserves, a simple but generally accepted algorithm is the classical chain-ladder method. Recent research has focused mainly on the underlying model for the claims reserves in order to obtain appropriate bounds for the estimates of future claims reserves. Our research concentrates on scenarios with outlying data. On closer examination it turns out that the forecasts of future claims reserves are highly sensitive to outlying observations. The paper focuses on two approaches to robustify the chain-ladder method: the first method detects and adjusts the outlying values, whereas the second method is based on a robust generalized linear model technique. In this way insurers will be able to find a reserve that is similar to the one they would have found had the data contained no outliers. Because the robust method flags the outliers, these observations can be set aside for closer inspection. The corresponding standard errors are obtained by bootstrapping. The robust chain-ladder method is applied to several run-off triangles with and without outliers, showing its excellent performance.
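In the spirit of the first (detect-and-adjust) approach, a minimal sketch: individual link ratios are flagged columnwise with a median/MAD rule and winsorized before volume-weighted development factors are formed. The triangle, the cutoff of 3 MAD units, and the clipping rule are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

# Hypothetical cumulative triangle with one contaminated cell (row 2, col 1).
C = np.array([
    [1001., 1855., 2423., 2988., 3335.],
    [1113., 2103., 2774., 3422., np.nan],
    [1265., 3900., 3233., np.nan, np.nan],   # 3900 is the planted outlier
    [1490., 2873., np.nan, np.nan, np.nan],
    [1725., np.nan, np.nan, np.nan, np.nan],
])
n = C.shape[0]
k = 3.0                                      # cutoff in (scaled) MAD units

f_robust = np.empty(n - 1)
for j in range(n - 1):
    rows = slice(0, n - 1 - j)
    ratios = C[rows, j + 1] / C[rows, j]     # individual link ratios
    med = np.median(ratios)
    mad = 1.4826 * np.median(np.abs(ratios - med))
    lo, hi = med - k * mad, med + k * mad
    flagged = (ratios < lo) | (ratios > hi)
    if flagged.any():
        print(f"development period {j}: flagged accident years {np.where(flagged)[0]}")
    adj = np.clip(ratios, lo, hi)            # winsorize the flagged ratios
    w = C[rows, j]
    f_robust[j] = np.sum(w * adj) / np.sum(w)

print("robust development factors:", np.round(f_robust, 3))
```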

3.
We introduce a continuous-time framework for the prediction of outstanding liabilities, in which chain-ladder development factors arise as a histogram estimator of a cost-weighted hazard function running in reversed development time. We use this formulation to show that, under our assumptions on the individual data, chain-ladder is consistent. Consistency is understood in the sense that the number of observed claims grows to infinity while the level of aggregation tends to zero. We propose alternatives to chain-ladder development factors by replacing the histogram estimator with kernel smoothers and by estimating a cost-weighted density instead of a cost-weighted hazard. Finally, we provide a real-data example and a simulation study confirming the strengths of the proposed alternatives.

4.
To measure the variability of outstanding claims reserve estimates, stochastic reserving methods need to be studied. Based on stochastic methods within the GLM framework, the reserve estimate and its mean squared error of prediction are obtained. In particular, for the over-dispersed Poisson model, the parametric bootstrap and the non-parametric bootstrap are applied to obtain the predictive distribution of the outstanding claims reserve under each method; from this distribution, quantiles and other distributional measures are derived. A numerical example from actuarial practice is analyzed empirically using the R software. The results show that the parameter error, process standard deviation, and mean squared error of prediction obtained by the two bootstrap methods are all very close to the analytic estimates.
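The non-parametric variant can be sketched along the lines of the well-known Pearson-residual bootstrap for the over-dispersed Poisson model. The paper's computations are in R; this sketch uses Python, a hypothetical triangle, and omits the process-error stage that a full predictive distribution would add.

```python
import numpy as np

rng = np.random.default_rng(1)

def cl_reserve(C):
    """Total chain-ladder reserve and factors from a cumulative triangle with NaNs."""
    n = C.shape[0]
    f = np.array([np.nansum(C[: n - 1 - j, j + 1]) / np.nansum(C[: n - 1 - j, j])
                  for j in range(n - 1)])
    full = C.copy()
    for i in range(1, n):
        for j in range(n - i, n):
            full[i, j] = full[i, j - 1] * f[j - 1]
    latest = np.array([C[i, n - 1 - i] for i in range(n)])
    return float(np.sum(full[:, -1] - latest)), f

C = np.array([
    [1001., 1855., 2423., 2988., 3335.],
    [1113., 2103., 2774., 3422., np.nan],
    [1265., 2433., 3233., np.nan, np.nan],
    [1490., 2873., np.nan, np.nan, np.nan],
    [1725., np.nan, np.nan, np.nan, np.nan],
])
n = C.shape[0]
reserve_hat, f = cl_reserve(C)

# Fitted incrementals: back-cast the latest diagonal with the factors, then difference.
fitted = np.full_like(C, np.nan)
for i in range(n):
    fitted[i, n - 1 - i] = C[i, n - 1 - i]
    for j in range(n - 1 - i, 0, -1):
        fitted[i, j - 1] = fitted[i, j] / f[j - 1]
m = np.diff(np.concatenate([np.zeros((n, 1)), fitted], axis=1), axis=1)
X = np.diff(np.concatenate([np.zeros((n, 1)), C], axis=1), axis=1)
r = (X - m) / np.sqrt(m)                 # Pearson residuals under the ODP model
pool = r[~np.isnan(r)]
pool = pool[pool != 0.0]                 # corner residuals are identically zero

B = 1000
boot = np.empty(B)
for b in range(B):
    rs = rng.choice(pool, size=r.shape)  # resample residuals with replacement
    Xs = m + rs * np.sqrt(m)             # pseudo incremental triangle
    Cs = np.where(np.isnan(C), np.nan, np.cumsum(np.nan_to_num(Xs), axis=1))
    boot[b], _ = cl_reserve(Cs)

print(f"point estimate {reserve_hat:,.0f}")
print("bootstrap 75% / 95% quantiles:", np.quantile(boot, [0.75, 0.95]).round(0))
```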

5.
Renshaw and Verrall specified the generalized linear model (GLM) underlying the chain-ladder technique and suggested some other GLMs which might be useful in claims reserving. The purpose of this paper is to construct bounds for the discounted loss reserve within the framework of GLMs. Exact calculation of the distribution of the total reserve is not feasible, and hence the determination of lower and upper bounds with a simpler structure is a possible way out. The paper ends with numerical examples illustrating the usefulness of the presented approximations.

6.
ABSTRACT

We propose an asymptotic theory for distribution forecasting from the log-normal chain-ladder model. The theory overcomes the difficulty of convoluting log-normal variables and takes estimation error into account. The results differ from those of the over-dispersed Poisson model and from the chain-ladder-based bootstrap. We embed the log-normal chain-ladder model in a class of infinitely divisible distributions called the generalized log-normal chain-ladder model. The asymptotic theory uses small-σ asymptotics, in which the dimension of the reserving triangle is kept fixed while the standard deviation is assumed to decrease. The resulting asymptotic forecast distributions are t distributions. The theory is supported by simulations and an empirical application.
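The underlying log-normal chain-ladder model can be sketched as a two-way regression of log incremental claims on accident-year and development-year effects. The triangle below is hypothetical, and the paper's actual contribution (t-distributed forecasts accounting for estimation error) is not reproduced, only the plug-in mean forecast.

```python
import numpy as np

# Hypothetical incremental triangle (strictly positive), NaN unobserved.
X = np.array([
    [1001.,  854.,  568.,  565.,  347.],
    [1113.,  990.,  671.,  648., np.nan],
    [1265., 1168.,  800., np.nan, np.nan],
    [1490., 1383., np.nan, np.nan, np.nan],
    [1725., np.nan, np.nan, np.nan, np.nan],
])
n = X.shape[0]

# Two-way design on the log scale: log X_ij = intercept + row_i + col_j + noise.
obs = [(i, j) for i in range(n) for j in range(n - i)]
y = np.array([np.log(X[i, j]) for i, j in obs])
D = np.zeros((len(obs), 1 + 2 * (n - 1)))
D[:, 0] = 1.0
for k, (i, j) in enumerate(obs):
    if i > 0:
        D[k, i] = 1.0                  # accident-year effect
    if j > 0:
        D[k, (n - 1) + j] = 1.0        # development-year effect
beta, *_ = np.linalg.lstsq(D, y, rcond=None)
resid = y - D @ beta
s2 = resid @ resid / (len(y) - D.shape[1])

def mean_forecast(i, j):
    """Plug-in log-normal mean forecast exp(mu_ij + s2/2) for a future cell."""
    x = np.zeros(D.shape[1])
    x[0] = 1.0
    if i > 0:
        x[i] = 1.0
    if j > 0:
        x[(n - 1) + j] = 1.0
    return np.exp(x @ beta + 0.5 * s2)

reserve = sum(mean_forecast(i, j) for i in range(1, n) for j in range(n - i, n))
print(f"log-normal chain-ladder reserve (plug-in mean): {reserve:,.0f}")
```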

7.
In this paper, we investigate the association between the efficiency of infrastructure provision and the level of corruption in the province in which the infrastructure is located, employing a large dataset on Italian public works contracts. We first estimate efficiency in public contract execution using a smoothed DEA bootstrap procedure that ensures consistency of our estimates. Then, we evaluate the effects of corruption using a semi-parametric technique that produces inference robust to unknown serial correlation among the efficiency scores. To test the robustness of our results, the parametric stochastic frontier approach is also employed. The results from both the nonparametric and parametric techniques show that greater corruption in the area where the infrastructure provision is located is associated with lower efficiency in public contract execution.

8.
In the present paper, we consider a portfolio of risks consisting of two subportfolios, and we study the problem of whether or not the predictors based on the subportfolios are consistent with those based on the full portfolio. We study this aggregation problem for both the chain-ladder method and the additive method (or incremental loss ratio method). In the case of the chain-ladder method we extend results of Ajne and Klemmt, using the duality of the chain-ladder method applied to incremental losses; we also give a short proof of this duality, which was first observed by Barnett, Zehnwirth & Dubossarsky. In the case of the additive method the aggregation problem has not been studied before, and its solution is surprisingly simple.
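A small numerical check of the aggregation question, with hypothetical triangles: chain-ladder applied to each subportfolio and to the pooled triangle generally produces different total reserves unless the development patterns agree.

```python
import numpy as np

def cl_reserve(C):
    """Total chain-ladder reserve for a cumulative triangle with NaNs."""
    n = C.shape[0]
    f = np.array([np.nansum(C[: n - 1 - j, j + 1]) / np.nansum(C[: n - 1 - j, j])
                  for j in range(n - 1)])
    full = C.copy()
    for i in range(1, n):
        for j in range(n - i, n):
            full[i, j] = full[i, j - 1] * f[j - 1]
    latest = np.array([C[i, n - 1 - i] for i in range(n)])
    return float(np.sum(full[:, -1] - latest))

# Two hypothetical subportfolios with different development patterns.
A = np.array([
    [100., 180., 210.],
    [110., 200., np.nan],
    [130., np.nan, np.nan],
])
B = np.array([
    [200., 240., 250.],
    [230., 270., np.nan],
    [260., np.nan, np.nan],
])

print("sum of subportfolio reserves:", round(cl_reserve(A) + cl_reserve(B), 1))
print("reserve of the aggregated triangle:", round(cl_reserve(A + B), 1))
```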

9.
We review developments in conducting inference for model parameters in the presence of intertemporal and cross-sectional dependence, with an emphasis on panel data applications. We review the use of heteroskedasticity and autocorrelation consistent (HAC) standard error estimators, which include the standard clustered and multiway clustered estimators, and discuss alternative sample-splitting inference procedures, such as the Fama–MacBeth procedure, within this context. We outline pros and cons of the different procedures. We then illustrate the properties of the discussed procedures within a simulation experiment designed to mimic the type of firm-level panel data that might be encountered in accounting and finance applications. Our conclusion, based on theoretical properties and simulation performance, is that sample-splitting procedures with suitably chosen splits are the most likely to deliver robust inferential statements with approximately correct coverage properties in the types of large, heterogeneous panels many researchers are likely to face.
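A minimal sketch of the Fama–MacBeth procedure on simulated panel data with a common time shock (one source of cross-sectional dependence); all figures are hypothetical. The plain standard error below treats the slope series as serially uncorrelated, one of the caveats the survey discusses.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, beta = 120, 500, 0.5                      # months, firms, true slope
x = rng.normal(size=(T, N))
time_shock = rng.normal(size=(T, 1))            # common shock: cross-sectional dependence
y = beta * x + time_shock + rng.normal(size=(T, N))

# Fama-MacBeth: one cross-sectional regression per period, then average the slopes.
slopes = np.empty(T)
for t in range(T):
    Xt = np.column_stack([np.ones(N), x[t]])
    b, *_ = np.linalg.lstsq(Xt, y[t], rcond=None)
    slopes[t] = b[1]

fm_est = slopes.mean()
fm_se = slopes.std(ddof=1) / np.sqrt(T)         # assumes serially uncorrelated slopes
print(f"FM estimate {fm_est:.3f} (true {beta}), t-stat {fm_est / fm_se:.1f}")
```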

10.
In non-life insurance, the provision for outstanding claims (the claims reserve) should include future loss adjustment expenses, i.e. administrative expenses to settle the claims, and therefore we have to estimate the expected Unallocated Loss Adjustment Expenses (ULAE) – expenses that are not attributable to individual claims, such as salaries at the claims handling department. The ULAE reserve has received little attention from European actuaries in the literature, presumably because of the lack of detailed data for estimation and evaluation. Good estimation procedures will, however, become even more important with the introduction of the Solvency II regulations, which require unbiased estimation of future cash flows for all expenses. We present a model for ULAE at the individual claim level that includes both fixed and variable costs. This model leads to an estimate of the ULAE reserve at the aggregate (line-of-business) level, as demonstrated in a numerical example from a Swedish non-life insurer.
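A minimal back-of-the-envelope sketch in the spirit of a fixed-plus-variable cost structure; the cost split, claim counts, and payment forecast are all hypothetical assumptions, not the authors' estimator or data.

```python
# All figures are hypothetical illustrations of a fixed-plus-variable ULAE model.
n_open, n_ibnr = 1200, 300        # open claims and expected unreported claims
c_open = 80.0                     # remaining fixed handling cost per open claim
c_ibnr = 150.0                    # full fixed handling cost per unreported claim
v = 0.02                          # variable cost per unit of future claim payment
future_payments = 5.0e6           # forecast of future claim payments

ulae_reserve = n_open * c_open + n_ibnr * c_ibnr + v * future_payments
print(f"ULAE reserve: {ulae_reserve:,.0f}")
```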

11.
If one is interested in managing fraud, one must measure the fraud rate to be able to assess the degree of the problem and the effectiveness of the fraud management technique. This article offers a robust new method for estimating the fraud rate, PRIDIT-FRE (PRIDIT-based Fraud Rate Estimation), developed from PRIDIT, an unsupervised fraud detection method that assesses the fraud suspiciousness of individual claims. PRIDIT-FRE presents the first nonparametric unsupervised estimator of the actual rate of fraud in a population of claims, robust to the bias contained in an audited sample (arising from the quality or individual hubris of an auditor or investigator, or from the natural data-gathering process through claims adjusting). PRIDIT-FRE exploits the internal consistency of fraud predictors and makes use of a small audited sample or an unaudited sample only. Using two insurance fraud data sets with different characteristics, we illustrate the effectiveness of PRIDIT-FRE and examine its robustness in varying scenarios.
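A sketch of the PRIDIT scoring step that PRIDIT-FRE builds on: ordinal fraud indicators are RIDIT-scored, and the first principal component supplies indicator weights and claim-level suspicion scores. The data are simulated from a hypothetical latent-suspicion model, and the fraud-rate estimation step itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical claims: a latent suspicion level drives six ordinal red flags.
n, p = 1000, 6
latent = rng.normal(size=n)
X = np.empty((n, p), dtype=int)
for v in range(p):
    noisy = latent + rng.normal(size=n)
    X[:, v] = np.digitize(noisy, np.quantile(noisy, [0.25, 0.5, 0.75])) + 1  # 1..4

# RIDIT scoring: category k maps to P(lower category) - P(higher category).
S = np.empty(X.shape)
for v in range(p):
    cats, counts = np.unique(X[:, v], return_counts=True)
    pr = counts / counts.sum()
    below = np.concatenate([[0.0], np.cumsum(pr)[:-1]])
    above = 1.0 - np.cumsum(pr)
    score = dict(zip(cats, below - above))
    S[:, v] = [score[c] for c in X[:, v]]

# PRIDIT: the first principal component of the RIDIT scores gives indicator
# weights; its projection is the claim-level suspicion score.
Sc = S - S.mean(axis=0)
_, _, Vt = np.linalg.svd(Sc, full_matrices=False)
w = Vt[0]
if w.sum() < 0:            # the PC sign is arbitrary; orient toward "suspicious"
    w = -w
suspicion = Sc @ w
print("indicator weights:", np.round(w, 3))
print("five most suspicious claims:", np.argsort(suspicion)[-5:])
```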

12.
It has become standard practice in the fund performance evaluation literature to use the bootstrap approach to distinguish “skill” from “luck”, yet its reliability has not been subjected to rigorous statistical analysis. This paper reviews and critiques the bootstrap schemes used in the literature, and provides a simulation analysis of the validity and reliability of the bootstrap approach by applying it to the evaluation of hypothetical funds under various assumptions. We argue that this approach can be misleading, regardless of whether alpha estimates or their t-statistics are used. While alternative bootstrap schemes can yield improvements, they are not foolproof either, and matters can be worse if the benchmark model is misspecified. It is therefore only with caution that we can use the bootstrap approach to evaluate the performance of funds, and we offer some suggestions for improving it.
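A minimal version of the kind of scheme under scrutiny: fund-by-fund resampling under the null of zero alpha, with the cross-sectional maximum alpha t-statistic as the test statistic. The data-generating process and all settings are hypothetical, and the paper's point is precisely that such schemes need care.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 240, 200                                 # months, funds
mkt = rng.normal(0.006, 0.04, size=T)           # hypothetical benchmark returns
beta = rng.uniform(0.8, 1.2, size=N)
R = mkt[:, None] * beta + rng.normal(0, 0.02, size=(T, N))   # true alpha = 0

def alpha_tstats(R, f):
    X = np.column_stack([np.ones(len(f)), f])
    coef, *_ = np.linalg.lstsq(X, R, rcond=None)
    resid = R - X @ coef
    s2 = (resid ** 2).sum(axis=0) / (len(f) - 2)
    return coef[0] / np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])

t_actual = alpha_tstats(R, mkt)

# Fund-by-fund bootstrap under the null: rebuild returns from the fitted beta
# and resampled residuals, then record the cross-sectional maximum t-statistic.
X = np.column_stack([np.ones(T), mkt])
coef, *_ = np.linalg.lstsq(X, R, rcond=None)
resid = R - X @ coef
null_max = np.empty(500)
for b in range(500):
    idx = rng.integers(0, T, size=T)
    Rb = mkt[idx, None] * coef[1] + resid[idx]  # alpha imposed to be zero
    null_max[b] = alpha_tstats(Rb, mkt[idx]).max()

pval = np.mean(null_max >= t_actual.max())
print(f"max alpha t-stat {t_actual.max():.2f}, bootstrap p-value {pval:.2f}")
```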

13.
Insurance claims data usually contain a large number of zeros and exhibit fat-tail behavior. Misestimation of one end of the tail impacts the other end of the tail of the claims distribution and can affect both the adequacy of premiums and the reserves that need to be held. In addition, insured policyholders in a portfolio are naturally non-homogeneous. It is an ongoing challenge for actuaries to build a predictive model that simultaneously captures these peculiar characteristics of claims data and policyholder heterogeneity. Such models can help improve predictions and thereby ease the decision-making process. This article proposes the use of spliced regression models for fitting insurance loss data. A primary advantage of spliced distributions is their flexibility to model different segments of the claims distribution with different parametric models. The threshold that separates the segments is treated as a parameter, which presents an additional challenge in the estimation. Our simulation study demonstrates the effectiveness of multistage optimization for likelihood inference and, at the same time, the repercussions of model misspecification. For purposes of illustration, we consider three-component spliced regression models: the first component contains the zeros, the second models the middle segment of the loss data, and the third models the tail segment. We calibrate these proposed models and evaluate their performance using a Singapore auto insurance claims dataset. The estimation results show that the spliced regression model outperforms the Tweedie regression model in terms of tail fitting and prediction accuracy.
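A sketch of a three-component spliced fit with multistage optimization: a threshold grid in the outer stage and continuous parameters in the inner stage. The concrete components chosen here (a point mass at zero, a log-normal body truncated at the threshold, a Pareto tail) and all figures are illustrative assumptions; the article's models are regression models with covariates, which this sketch omits.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)

# Hypothetical data: P(zero) = 0.6; positive losses are log-normal below the
# threshold tau0 and Pareto above it.
n, tau0, mu0, s0, a0 = 5000, 5000.0, 7.5, 0.8, 2.0
u = rng.random(n)
y = np.zeros(n)
body, tail = (u >= 0.6) & (u < 0.9), u >= 0.9
Ftau = stats.lognorm.cdf(tau0, s0, scale=np.exp(mu0))
y[body] = stats.lognorm.ppf(rng.random(body.sum()) * Ftau, s0, scale=np.exp(mu0))
y[tail] = tau0 * rng.random(tail.sum()) ** (-1.0 / a0)        # Pareto tail

def nll(theta, yb, yt, tau):
    """Negative log-likelihood of the positive part for a fixed threshold."""
    mu, s, a = theta
    if s <= 0 or a <= 0:
        return np.inf
    q = len(yb) / (len(yb) + len(yt))                # body weight at its MLE
    Ft = stats.lognorm.cdf(tau, s, scale=np.exp(mu))
    ll = (len(yb) * np.log(q) + len(yt) * np.log1p(-q)
          + np.sum(stats.lognorm.logpdf(yb, s, scale=np.exp(mu)) - np.log(Ft))
          + np.sum(np.log(a) + a * np.log(tau) - (a + 1) * np.log(yt)))
    return -ll

# Stage 1: grid over the threshold; stage 2: continuous parameters given tau.
pos = y[y > 0]
best = None
for tau in (3000.0, 4000.0, 5000.0, 6000.0, 8000.0):
    yb, yt = pos[pos <= tau], pos[pos > tau]
    res = optimize.minimize(nll, x0=[7.0, 1.0, 1.5], args=(yb, yt, tau),
                            method="Nelder-Mead")
    if best is None or res.fun < best[0]:
        best = (res.fun, tau, res.x)

_, tau_hat, (mu_hat, s_hat, a_hat) = best
print(f"threshold {tau_hat:.0f}; mu {mu_hat:.2f}, sigma {s_hat:.2f}, alpha {a_hat:.2f}")
```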

14.
Previous research has reported that analysts’ forecasts of company profits are both optimistically biased and inefficient. However, many prior studies have applied ordinary least-squares regression to data where heteroskedasticity and non-normality are common problems, potentially resulting in misleading inferences. Furthermore, most prior studies deflate earnings and forecasts in an attempt to correct for non-constant error variances, often changing the specification of the underlying regression equation. We describe and employ the wild bootstrap, a technique that is robust to both heteroskedasticity and non-normality, to assess the reliability of prior studies of analysts’ forecasts. Based on a large sample of 23,283 firm years covering the period 1981–2002, our main results confirm the findings of prior research. Our results also suggest that deflation may not be a successful method of correcting for heteroskedasticity, providing a strong rationale for using the wild bootstrap in future work in this and other areas of accounting and finance research.
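A minimal null-restricted wild bootstrap for a regression slope, with Rademacher multipliers, under simulated heteroskedastic, fat-tailed errors; all settings are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
x = rng.lognormal(size=n)                          # skewed regressor
e = rng.standard_t(4, size=n) * (1.0 + 0.5 * x)    # heteroskedastic, fat tails
y = 1.0 + 0.0 * x + e                              # true slope is zero

X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)
bhat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ bhat
t_hat = bhat[1] / np.sqrt(resid @ resid / (n - 2) * XtX_inv[1, 1])

# Null-restricted wild bootstrap: keep x fixed, rebuild y from the restricted
# residuals times random signs, so each pseudo-error keeps its own scale.
resid_r = y - y.mean()                             # residuals under H0: slope = 0
B = 2000
tb = np.empty(B)
for b in range(B):
    v = rng.choice([-1.0, 1.0], size=n)            # Rademacher multipliers
    yb = y.mean() + v * resid_r
    bb, *_ = np.linalg.lstsq(X, yb, rcond=None)
    rb = yb - X @ bb
    tb[b] = bb[1] / np.sqrt(rb @ rb / (n - 2) * XtX_inv[1, 1])

print("wild bootstrap p-value:", np.mean(np.abs(tb) >= abs(t_hat)))
```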

15.
Financial advisors commonly recommend a rather long investment horizon in order to benefit from ‘time diversification’. In this case, choosing the optimal portfolio requires estimating the risk and reward of several alternative portfolios over the long run from a sample of observations covering a short run. Two interrelated obstacles in these estimations are the lack of sufficient data and the uncertainty about the nature of the return-generating process. To overcome these obstacles researchers rely heavily on block bootstrap methods. In this paper we demonstrate that the estimates provided by a block bootstrap method are generally biased, and we propose two methods of bias reduction. We show that an improper use of a block bootstrap method typically causes underestimation of the risk of a portfolio whose returns are independent over time and overestimation of the risk of a portfolio whose returns are mean-reverting.
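A sketch of the sensitivity at issue: the standard deviation of a compounded ten-year return estimated by a circular block bootstrap from hypothetical mean-reverting (negative AR(1)) monthly returns, compared across block lengths and against direct simulation from the true model. The bias-reduction methods proposed in the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_path(T, mu=0.005, phi=-0.2, sd=0.04):
    """Hypothetical mean-reverting monthly returns (negative AR(1))."""
    r = np.empty(T)
    r[0] = mu + rng.normal(0, sd)
    for t in range(1, T):
        r[t] = mu + phi * (r[t - 1] - mu) + rng.normal(0, sd)
    return r

def bb_horizon_sd(r, h, block, B=2000):
    """Std dev of the compounded h-month log return via circular block bootstrap."""
    T, nblk = len(r), int(np.ceil(h / block))
    out = np.empty(B)
    for i in range(B):
        starts = rng.integers(0, T, size=nblk)
        idx = ((starts[:, None] + np.arange(block)) % T).ravel()[:h]
        out[i] = np.log1p(r[idx]).sum()
    return out.std(ddof=1)

r = ar1_path(600)                                # 50 years of monthly data
truth = np.std([np.log1p(ar1_path(120)).sum() for _ in range(2000)], ddof=1)
print(f"true 10-year sd (direct simulation): {truth:.3f}")
for block in (1, 12, 60):
    print(f"block length {block:>2}: {bb_horizon_sd(r, 120, block):.3f}")
```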

16.
This paper provides a method for testing for regime differences when regimes are long-lasting. Standard testing procedures are generally inappropriate because regime persistence causes a spurious regression problem – a problem that has led to incorrect inference in a broad range of studies involving regimes representing political, business, and seasonal cycles. The paper outlines analytically how standard estimators can be adjusted for regime dummy variable persistence. While the adjustments help asymptotically, spurious regression remains a problem in small samples and must be addressed using simulation or bootstrap procedures. We provide a simulation procedure for testing hypotheses when an independent variable in a time-series regression is a persistent regime dummy variable, and we develop a corresponding procedure for the case where the dependent variable has similar properties.
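A minimal version of the simulation procedure for the independent-variable case: keep the observed persistent series fixed, redraw regime paths from a hypothetical two-state Markov chain, and use the simulated t-statistics as the reference distribution; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T, B = 400, 2000

def regime_path(T, p_stay=0.95):
    """0/1 dummy from a hypothetical two-state Markov chain (persistent regimes)."""
    d = np.empty(T)
    d[0] = rng.integers(0, 2)
    for t in range(1, T):
        d[t] = d[t - 1] if rng.random() < p_stay else 1 - d[t - 1]
    return d

def tstat(y, d):
    X = np.column_stack([np.ones(len(y)), d])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b
    s2 = e @ e / (len(y) - 2)
    return b[1] / np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])

# A persistent series unrelated to the regime: the usual t-test over-rejects.
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.9 * y[t - 1] + rng.normal()
d_obs = regime_path(T)
t_obs = tstat(y, d_obs)

# Simulated null: keep y fixed, redraw independent regime paths, and compare
# the observed t-statistic with the simulated reference distribution.
t_null = np.array([tstat(y, regime_path(T)) for _ in range(B)])
print(f"t = {t_obs:.2f}, simulation p-value = {np.mean(np.abs(t_null) >= abs(t_obs)):.2f}")
print(f"5% critical value: {np.quantile(np.abs(t_null), 0.95):.2f} (normal: 1.96)")
```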

17.
Abstract

Credibility is a form of insurance pricing that is widely used, particularly in North America. The theory of credibility has been called a “cornerstone” in the field of actuarial science. Students of the North American actuarial bodies also study loss distributions, i.e. the statistical inference of relating a set of data to a theoretical (loss) distribution. In this work, we develop a direct link between credibility and loss distributions through the notion of a copula, a tool for understanding relationships among multivariate outcomes.

This paper develops credibility using a longitudinal data framework. In a longitudinal data framework, one might encounter data from a cross section of risk classes (towns) with a history of insurance claims available for each risk class. For the marginal claims distributions, we use generalized linear models, an extension of linear regression that also encompasses Weibull and Gamma regressions. Copulas are used to model the dependencies over time; specifically, this paper is the first to propose using a t-copula in the context of generalized linear models. The t-copula is the copula associated with the multivariate t-distribution; like the univariate t-distribution, it seems especially suitable for empirical work. Moreover, we show that the t-copula gives rise to easily computable predictive distributions that we use to generate credibility predictors. Like Bayesian methods, our copula credibility prediction methods allow us to provide an entire distribution of predicted claims, not just a point prediction.

We present an illustrative example of Massachusetts automobile claims, and compare our new credibility estimates with those currently existing in the literature.
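A sketch of sampling from a t-copula with Gamma marginals, the combination the paper proposes within generalized linear models; the exchangeable correlation, degrees of freedom, and Gamma parameters are hypothetical, and the credibility prediction step is not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
dim, df, rho = 5, 6, 0.5                       # years per class, t dof, correlation
R = np.full((dim, dim), rho) + (1 - rho) * np.eye(dim)
L = np.linalg.cholesky(R)

def t_copula_gamma(n, shape, scale):
    """Claims for n risk classes: Gamma marginals, t-copula dependence in time."""
    Z = rng.normal(size=(n, dim)) @ L.T        # correlated normals
    W = rng.chisquare(df, size=(n, 1))
    T = Z / np.sqrt(W / df)                    # multivariate t draws
    U = stats.t.cdf(T, df)                     # uniform margins via the t cdf
    return stats.gamma.ppf(U, shape, scale=scale)

claims = t_copula_gamma(10_000, shape=2.0, scale=500.0)
print("between-year correlation of claims:",
      round(float(np.corrcoef(claims[:, 0], claims[:, 1])[0, 1]), 2))
```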

18.
This article proposes using credibility theory in the context of stochastic claims reserving. We consider the situation where an insurer has access to the claims experience of its peer competitors and has the potential to improve the prediction of outstanding liabilities by incorporating information from other insurers. Based on the framework of Bayesian linear models, we show that the development factor in the classical chain-ladder setting has a credibility expression: a weighted average of the prior mean and the best estimate from the data. In the empirical analysis, we examine loss triangles for the commercial auto insurance line from a portfolio of insurers in the United States. We employ a hierarchical model for the specification of the prior and show, based on hold-out sample validation, that prediction can be improved by borrowing strength among insurers.
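A minimal sketch of the credibility form of a development factor, here in a Bühlmann-style formulation rather than the paper's Bayesian linear model; all figures, including the industry prior, are hypothetical.

```python
import numpy as np

# Hypothetical inputs: an industry prior for one development factor and a few
# company-specific link ratios for the same development period.
f_prior, tau2 = 1.45, 0.02 ** 2                 # prior mean and prior variance
ratios = np.array([1.52, 1.48, 1.55, 1.50])     # company link ratios
sigma2 = ratios.var(ddof=1)                     # within-company variability
k = sigma2 / tau2                               # credibility constant
Z = len(ratios) / (len(ratios) + k)
f_cred = Z * ratios.mean() + (1 - Z) * f_prior  # weighted average of prior and data
print(f"Z = {Z:.2f}, credibility development factor = {f_cred:.3f}")
```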

19.
Applied researchers often test for a difference between the Sharpe ratios of two investment strategies. A very popular tool to this end is the test of Jobson and Korkie [Jobson, J.D. and Korkie, B.M. (1981). Performance hypothesis testing with the Sharpe and Treynor measures. Journal of Finance, 36:889–908], as corrected by Memmel [Memmel, C. (2003). Performance hypothesis testing with the Sharpe ratio. Finance Letters, 1:21–23]. Unfortunately, this test is not valid when returns have tails heavier than the normal distribution or are of a time series nature. Instead, we propose the use of robust inference methods. In particular, we suggest constructing a studentized time series bootstrap confidence interval for the difference of the Sharpe ratios and declaring the two ratios different if zero is not contained in the obtained interval. This approach has the advantage that one can simply resample from the observed data, as opposed to some null-restricted data. A simulation study demonstrates the improved finite sample performance compared to existing methods. In addition, two applications to real data are provided.
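A simplified sketch: a circular block bootstrap of the paired return series and a basic (non-studentized) bootstrap interval for the Sharpe ratio difference. The paper's proposal is the studentized version, which additionally requires a standard error within each bootstrap sample; data and settings here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 240
# Hypothetical pair of monthly excess-return series sharing a common component.
common = rng.standard_t(5, size=T) * 0.03
ra = 0.004 + common + rng.normal(0, 0.010, size=T)
rb = 0.003 + 0.8 * common + rng.normal(0, 0.012, size=T)

sharpe = lambda r: r.mean() / r.std(ddof=1)
d_hat = sharpe(ra) - sharpe(rb)

# Circular block bootstrap of the *paired* series, preserving serial and
# cross-sectional dependence.
B, block = 5000, 12
nblocks = int(np.ceil(T / block))
diffs = np.empty(B)
for b in range(B):
    starts = rng.integers(0, T, size=nblocks)
    idx = ((starts[:, None] + np.arange(block)) % T).ravel()[:T]
    diffs[b] = sharpe(ra[idx]) - sharpe(rb[idx])

lo, hi = np.quantile(diffs - d_hat, [0.025, 0.975])   # basic bootstrap interval
ci = (d_hat - hi, d_hat - lo)
print(f"difference {d_hat:.3f}, 95% CI [{ci[0]:.3f}, {ci[1]:.3f}]")
print("reject equal Sharpe ratios" if ci[0] * ci[1] > 0 else "no rejection")
```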

20.
Data insufficiency and reporting thresholds are two main issues in operational risk modelling. When these conditions are present, maximum likelihood estimation (MLE) may produce very poor parameter estimates. In this study, we first investigate four methods for estimating the parameters of truncated distributions from small samples: MLE, the expectation-maximization algorithm, penalized likelihood estimators, and Bayesian methods. In the absence of proper prior information, Jeffreys’ prior for truncated distributions is used. Based on a simulation study for the log-normal distribution, we find that the Bayesian method gives much more credible and reliable estimates than MLE. Finally, an application to operational loss severity estimation using real data is conducted with the truncated log-normal and log-gamma distributions. With the Bayesian method, the loss distribution parameters and the value-at-risk measure for every cell with loss data can be estimated separately for internal and external data. Moreover, confidence intervals for the Bayesian estimates are obtained via a bootstrap method.
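A sketch of the truncated-distribution MLE that the study finds unreliable in small samples: the log-likelihood conditions each reported loss on exceeding the threshold. Data and the threshold are hypothetical; the Bayesian estimation with Jeffreys' prior is not reproduced.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
mu0, s0, H = 9.0, 1.8, 10000.0                  # true parameters, threshold
x = rng.lognormal(mu0, s0, size=20000)
x = x[x >= H][:60]                              # a small sample of reported losses

def nll(theta):
    """Log-normal likelihood conditional on exceeding the threshold H."""
    mu, s = theta
    if s <= 0:
        return np.inf
    logpdf = stats.lognorm.logpdf(x, s, scale=np.exp(mu))
    logsf = stats.lognorm.logsf(H, s, scale=np.exp(mu))
    return -np.sum(logpdf - logsf)

res = optimize.minimize(nll, x0=[np.log(np.median(x)), 1.0], method="Nelder-Mead")
print("truncated MLE (mu, sigma):", np.round(res.x, 2), "true:", (mu0, s0))
# With only 60 observations the MLE is often unstable, which motivates the
# Bayesian alternative with Jeffreys' prior studied in the paper.
```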
