Similar Documents
20 similar documents found.
1.
This article proposes a new loss reserving approach, inspired by the collective model of risk theory. According to the collective paradigm, we do not relate payments to specific claims or policies, but work within a frequency-severity setting, with a number of payments in every cell of the run-off triangle, together with the corresponding paid amounts. Compared to the Tweedie reserving model, which can be seen as a compound sum with a Poisson-distributed number of terms and Gamma-distributed summands, we allow here for more general severity distributions, typically mixture models combining a light-tailed component with a heavier-tailed one, including inflation effects. The severity model is fitted to individual observations and not to aggregated data displayed in run-off triangles with a single value in every cell. In that respect, the modeling approach is a powerful alternative to both the crude traditional aggregated approach based on triangles and the extremely detailed individual reserving approach that develops each and every claim separately. A case study based on a motor third-party liability insurance portfolio observed over 2004–2014 illustrates the relevance of the proposed approach.
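The frequency-severity idea can be illustrated with a short simulation. A minimal sketch, assuming numpy; the Poisson rate, the lognormal and Pareto parameters, and the mixture weight are illustrative choices, not values fitted in the paper:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def simulate_cell_payment(lam=3.0, p_light=0.95,
                          mu=7.0, sigma=1.0, pareto_shape=1.8, pareto_scale=5e4):
    """Simulate the total paid amount in one run-off cell as a compound sum:
    a Poisson number of payments, each drawn from a two-component mixture of
    a light-tailed lognormal and a heavier-tailed Pareto severity."""
    n = rng.poisson(lam)                      # number of payments in the cell
    light = rng.lognormal(mu, sigma, size=n)  # light-tailed component
    heavy = pareto_scale * (1 + rng.pareto(pareto_shape, size=n))  # heavy tail
    is_light = rng.random(n) < p_light        # mixture indicator per payment
    return np.where(is_light, light, heavy).sum()

# Monte Carlo estimate of the cell-level total payment distribution
totals = np.array([simulate_cell_payment() for _ in range(10_000)])
print(f"mean={totals.mean():,.0f}  99%-quantile={np.quantile(totals, 0.99):,.0f}")
```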

2.
Applications of state space models and the Kalman filter are comparatively underrepresented in stochastic claims reserving, usually because of the high complexity of matrix-based approaches, which complicates their application. To facilitate the implementation of state space models in practice, we present a state space model for cumulative payments in the framework of a scalar-based approach. In addition to a comprehensive presentation of this scalar state space model, some empirical applications and comparisons with popular stochastic claims reserving methods are performed, which show the strengths of the scalar state space model in practical applications. The model is a robustified extension of the well-known chain ladder method under the assumption that the observations in the upper triangle are based on unobservable states. Using Kalman filter recursions for prediction, filtering, and smoothing of cumulative payments, the entire unobservable lower and upper run-off triangles can be determined. Moreover, the model provides an easy way to find and smooth outliers and to interpolate gaps in the data. Thus, the problem of missing values in the upper triangle is also solved in a natural way.
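The scalar recursions are simple enough to state in a few lines. A minimal sketch, assuming numpy; the local development-level state equation, the factor f, and the variances q and r are illustrative stand-ins for the paper's model, shown mainly to illustrate how skipping the update step interpolates gaps:

```python
import numpy as np

def scalar_kalman(y, f=1.05, q=0.5, r=1.0, x0=0.0, p0=10.0):
    """Scalar Kalman filter for a local development-level model:
    state  x_t = f * x_{t-1} + w_t,  w_t ~ N(0, q)  (unobservable level)
    obs    y_t = x_t + v_t,          v_t ~ N(0, r)  (observed cumulative payment)
    Missing observations (np.nan) are handled by skipping the update step,
    which interpolates gaps in the triangle naturally."""
    x, p = x0, p0
    filtered = []
    for obs in y:
        x, p = f * x, f * f * p + q          # prediction step
        if not np.isnan(obs):                # update step (skipped for gaps)
            k = p / (p + r)                  # Kalman gain
            x, p = x + k * (obs - x), (1 - k) * p
        filtered.append(x)
    return np.array(filtered)

y = np.array([100.0, 152.0, np.nan, 240.0, 255.0])  # one accident year, with a gap
print(scalar_kalman(y).round(1))
```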

3.
In the present paper we analyse how the estimators of Merz and Wüthrich (2007) can be generalised to the case of N correlated run-off triangles. The simultaneous view on N correlated subportfolios is motivated by the fact that, in practice, a run-off portfolio often has to be divided into subportfolios so that the homogeneity assumption of the claims reserving method is satisfied on each subportfolio. We derive explicit formulas for the process variance, the estimation error, and the prediction error of the forecast of the claims development result under the chain ladder method. We illustrate the results with an example.

4.
In certain segments, IBNR calculations on paid triangles are more stable than on incurred triangles. However, calculations on payments often do not adequately take large losses into account. An IBNR method which separates large and attritional losses, and thus allows one to use payments for the attritional losses and incurred amounts for the large losses, was introduced by Riegel (see Riegel, U. (2014). A bifurcation approach for attritional and large losses in chain ladder calculations. Astin Bulletin 44, 127–172). The method corresponds to a stochastic model that is based on Mack's chain ladder model. In this paper, we analyse a quasi-additive version of this model, i.e. a version which is in essence based on the assumptions of the additive (or incremental loss ratio) method. We describe the corresponding IBNR method and derive formulas for the mean squared error of prediction.
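The additive (incremental loss ratio) method that underlies the quasi-additive variant is easy to sketch. A minimal illustration, assuming numpy; the toy incremental triangle and premium vector are invented for demonstration:

```python
import numpy as np

# Toy incremental triangle (rows: accident years, cols: development years)
S = np.array([[120.,  60.,    25.,    10.],
              [130.,  70.,    30.,    np.nan],
              [140.,  75.,    np.nan, np.nan],
              [150.,  np.nan, np.nan, np.nan]])
premium = np.array([400., 420., 450., 480.])

n = S.shape[0]
reserve = np.zeros(n)
for k in range(1, n):
    rows = np.arange(n - k)                       # accident years observed at dev k
    m_k = S[rows, k].sum() / premium[rows].sum()  # incremental loss ratio for dev k
    future = np.arange(n - k, n)                  # accident years missing dev k
    reserve[future] += m_k * premium[future]      # additive-method projection

print(reserve.round(1), reserve.sum().round(1))
```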

5.
Abstract

Traditional claims-reserving techniques are based on so-called run-off triangles containing aggregate claim figures. Such a triangle provides a summary of an underlying data set with individual claim figures. This contribution explores the interpretation of the available individual data in the framework of longitudinal data analysis. Making use of the theory of linear mixed models, a flexible model for loss reserving is built. Whereas traditional claims-reserving techniques do not lead directly to predictions for individual claims, the mixed model enables such predictions on a sound statistical basis, with, for example, confidence regions. Both a likelihood-based and a Bayesian approach are considered. In the frequentist approach, expressions for the mean squared error of prediction of an individual claim reserve, origin-year reserves, and the total reserve are derived. Using MCMC techniques, the Bayesian approach allows simulation from the complete predictive distribution of the reserves and the calculation of various risk measures. The paper ends with an illustration of the suggested techniques on a data set from practice, consisting of Belgian automotive third-party liability claims. The results of the mixed-model analysis are compared with those obtained from traditional claims-reserving techniques for run-off triangles. For the data under consideration, the lognormal mixed model fits the observed individual data well. It leads to individual predictions comparable to those obtained by applying chain-ladder development factors to individual data. Concerning predictive power on the aggregate level, the mixed model leads to reasonable predictions and performs comparably to, and often better than, the stochastic chain ladder for aggregate data.
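A lognormal mixed model of this kind can be fitted with standard software. A minimal sketch, assuming numpy, pandas, and statsmodels; the toy data, the single fixed development trend, and the claim-level random intercept are illustrative simplifications of the paper's model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Toy individual data: log-payments grow with development year,
# with a random intercept per claim (the longitudinal grouping).
claims = np.repeat(np.arange(40), 5)          # 40 claims, 5 development years each
dev = np.tile(np.arange(5), 40)
claim_effect = rng.normal(0.0, 0.4, size=40)[claims]
log_pay = 6.0 + 0.3 * dev + claim_effect + rng.normal(0.0, 0.2, size=claims.size)
df = pd.DataFrame({"claim": claims, "dev": dev, "log_pay": log_pay})

# Linear mixed model: fixed development trend, random claim-level intercept
model = smf.mixedlm("log_pay ~ dev", df, groups=df["claim"]).fit()
print(model.summary())

# Individual prediction for development year 5 (fixed part only; adding the
# estimated random effect refines it to the claim level)
fe = model.fe_params
print("predicted log-payment:", fe["Intercept"] + fe["dev"] * 5)
```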

6.
We connect classical chain ladder to granular reserving by defining explicitly how the classical run-off triangles are generated from individual iid observations in continuous time. One important result is that the development factors are in one-to-one correspondence with a histogram estimator of a hazard running in reversed development time. A second result is that chain ladder has a systematic bias if the row effect does not have the same distribution when conditioned on each of the aggregated periods. This means that the chain ladder assumptions on one level of aggregation, say yearly, differ from the chain ladder assumptions under quarterly aggregation, and the optimal level of aggregation is a classical bias-variance trade-off depending on the data set. We introduce smooth development factors arising from a non-parametric kernel smoother of the hazard, which improves the estimation significantly.
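For reference, the classical development factors that the one-to-one correspondence is about are computed as below. A minimal sketch, assuming numpy and an invented cumulative triangle; the reverse-time hazard estimator and the kernel-smoothed factors of the paper are not reproduced here:

```python
import numpy as np

# Cumulative run-off triangle (np.nan below the diagonal)
C = np.array([[100., 160.,   185.,   195.],
              [110., 175.,   205.,   np.nan],
              [125., 195.,   np.nan, np.nan],
              [140., np.nan, np.nan, np.nan]])

n = C.shape[0]
f = np.ones(n - 1)
for k in range(n - 1):
    rows = np.arange(n - 1 - k)                    # years with both columns observed
    f[k] = C[rows, k + 1].sum() / C[rows, k].sum() # classical development factor

# Complete the lower triangle by chaining the factors forward
full = C.copy()
for i in range(1, n):
    for k in range(n - i, n):
        full[i, k] = full[i, k - 1] * f[k - 1]

print("development factors:", f.round(3))
print("ultimates:", full[:, -1].round(1))
```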

7.
Abstract

The correlation among multiple lines of business plays an important role in quantifying the uncertainty of loss reserves for insurance portfolios. To accommodate correlation, most multivariate loss-reserving methods focus on the pairwise association between corresponding cells in multiple run-off triangles. However, such practice usually relies on the independence assumption across accident years and ignores the calendar year effects that could affect all open claims simultaneously and induce dependencies among loss triangles. To address this issue, we study a Bayesian log-normal model for the prediction of outstanding claims for dependent lines of business. In addition to the pairwise correlation, our method allows for an explicit examination of the correlation due to common calendar year effects. Further, different specifications of the calendar year trend are considered to reflect valuation actuaries' prior knowledge of claim development. In a case study, we analyze an insurance portfolio of personal and commercial auto lines from a major U.S. property-casualty insurer. It is shown that the incorporation of calendar year effects improves the model fit significantly, although it also adds substantially to the predictive variability. The availability of the realizations of predicted claims permits us to perform a retrospective test, which suggests that this extra prediction uncertainty is indispensable in modern risk management practice.

8.
The vast literature on stochastic loss reserving concentrates on data aggregated in run-off triangles. However, a triangle is a summary of an underlying data set with the development of individual claims. We refer to this data set as 'micro-level' data. Using the framework of position-dependent marked Poisson processes and statistical tools for recurrent events, we analyze a data set of liability claims from a European insurance company. We use detailed information on the time of occurrence of each claim, the delay between occurrence and reporting to the insurance company, the occurrences of payments and their sizes, and the final settlement. Our specifications are (semi)parametric and our approach is likelihood based. We calibrate our model to historical data and use it to project the future development of open claims. An out-of-sample prediction exercise shows that we obtain detailed and valuable reserve calculations. For the case study developed in this paper, the micro-level model outperforms the results obtained with traditional loss reserving methods for aggregate data.
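The ingredients of such micro-level data can be mimicked with a stylized simulation. A minimal sketch, assuming numpy; the occurrence intensity, delay distributions, and severity are invented for illustration and are not the (semi)parametric specifications of the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Micro-level claim generation: occurrence times on an exposure window,
# plus a reporting delay and a settlement delay per claim.
T = 365.0                                      # observation window in days
n_claims = rng.poisson(2.0 * T / 30)           # ~2 claims per 30 days
occ = np.sort(rng.uniform(0, T, n_claims))     # occurrence times
report_delay = rng.exponential(20.0, n_claims)                 # occurrence -> reporting
settle_delay = report_delay + rng.exponential(90.0, n_claims)  # -> settlement
paid = rng.lognormal(7.0, 1.0, n_claims)       # final claim amount

valuation = T                                  # evaluate at end of the window
reported = occ + report_delay <= valuation
settled = occ + settle_delay <= valuation

ibnr = paid[~reported].sum()                   # incurred but not reported
rbns = paid[reported & ~settled].sum()         # reported but not settled
print(f"claims={n_claims}  IBNR={ibnr:,.0f}  RBNS={rbns:,.0f}")
```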

9.
Insurers are faced with the challenge of estimating the future reserves needed to handle historic and outstanding claims that are not fully settled. A well-known and widely used technique is the chain-ladder method, which is a deterministic algorithm. To include a stochastic component, one may apply generalized linear models to the run-off triangles based on past claims data. Analytical expressions for the standard deviation of the resulting reserve estimates are typically difficult to derive. A popular alternative approach to obtain inference is the bootstrap technique. However, the standard procedures are very sensitive to the possible presence of outliers. These atypical observations, deviating from the pattern of the majority of the data, may inflate or deflate traditional reserve estimates and corresponding inference such as standard errors. Even when paired with a robust chain-ladder method, classical bootstrap inference may break down. Therefore, we discuss and implement several robust bootstrap procedures in the claims reserving framework, and we investigate and compare their performance on both simulated and real data. We also illustrate their use for obtaining the distribution of one-year risk measures.
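To fix ideas, a non-robust baseline bootstrap can be sketched in a few lines. A minimal illustration, assuming numpy; it resamples individual link ratios column by column on an invented triangle, which is a crude stand-in for the (robust) procedures studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

C = np.array([[100., 160.,   185.,   195.],
              [110., 175.,   205.,   np.nan],
              [125., 195.,   np.nan, np.nan],
              [140., np.nan, np.nan, np.nan]])
n = C.shape[0]

def reserve_from_factors(C, f):
    """Chain the development factors forward and return the total reserve."""
    full = C.copy()
    for i in range(1, n):
        for k in range(n - i, n):
            full[i, k] = full[i, k - 1] * f[k - 1]
    diag = np.array([C[i, n - 1 - i] for i in range(n)])   # latest observed values
    return (full[:, -1] - diag).sum()

# Observed individual link ratios per development column
links = [C[np.arange(n - 1 - k), k + 1] / C[np.arange(n - 1 - k), k]
         for k in range(n - 1)]

boot = []
for _ in range(5000):
    # Resample the link ratios within each column (a simple pairs-type bootstrap)
    f = np.array([rng.choice(l, size=l.size, replace=True).mean() for l in links])
    boot.append(reserve_from_factors(C, f))
boot = np.array(boot)
print(f"reserve s.e. ~ {boot.std():,.1f}; 95% of draws below {np.quantile(boot, 0.95):,.1f}")
```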

10.
Abstract

In a non-life insurance business an insurer often needs to build up a reserve to be able to meet future obligations arising from claims that have been incurred but not yet completely reported. To forecast these claims reserves, a simple but generally accepted algorithm is the classical chain-ladder method. Recent research has focused mainly on the underlying model for the claims reserves in order to obtain appropriate bounds for the estimates of future claims reserves. Our research concentrates on scenarios with outlying data. On closer examination, it is demonstrated that the forecasts for future claims reserves are highly sensitive to outlying observations. The paper focuses on two approaches to robustify the chain-ladder method: the first method detects and adjusts the outlying values, whereas the second method is based on a robust generalized linear model technique. In this way, insurers will be able to find a reserve that is similar to the one they would have found if the data contained no outliers. Because the robust method flags the outliers, these observations can be singled out for further examination. The corresponding standard errors are obtained by bootstrapping. The robust chain-ladder method is applied to several run-off triangles with and without outliers, showing its excellent performance.

11.
Abstract

We consider the three-factor double mean reverting (DMR) option pricing model of Gatheral [Consistent Modelling of SPX and VIX Options, 2008], a model which can be successfully calibrated to both VIX options and SPX options simultaneously. One drawback of this model is that calibration may be slow because no closed-form solution for European options exists. In this paper, we apply modified versions of the second-order Monte Carlo scheme of Ninomiya and Victoir [Appl. Math. Finance, 2008, 15, 107–121], and compare these to the Euler–Maruyama scheme with full truncation of Lord et al. [Quant. Finance, 2010, 10(2), 177–194], demonstrating on the one hand that fast calibration of the DMR model is practical, and on the other that suitably modified Ninomiya–Victoir schemes are applicable to the simulation of much more complicated time-homogeneous models than may have been thought previously.
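The full-truncation Euler scheme used as the comparison benchmark is easy to sketch for a single square-root factor. A minimal illustration, assuming numpy; the parameters and the one-factor setting are invented, and the Ninomiya-Victoir scheme itself is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(3)

def full_truncation_euler(v0, kappa, theta, xi, T, n_steps, n_paths):
    """Euler-Maruyama with full truncation for a square-root mean-reverting
    variance process dv = kappa*(theta - v) dt + xi*sqrt(v) dW: drift and
    diffusion use v+ = max(v, 0), keeping the scheme well defined when the
    discretised process dips below zero."""
    dt = T / n_steps
    v = np.full(n_paths, v0)
    for _ in range(n_steps):
        vp = np.maximum(v, 0.0)                      # full truncation
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        v = v + kappa * (theta - vp) * dt + xi * np.sqrt(vp) * dw
    return np.maximum(v, 0.0)

v_T = full_truncation_euler(v0=0.04, kappa=2.0, theta=0.04, xi=0.6,
                            T=1.0, n_steps=252, n_paths=100_000)
print(f"E[v_T] ~ {v_T.mean():.4f} (long-run mean theta = 0.04)")
```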

12.
Index tracking aims at replicating a given benchmark with a smaller number of its constituents. Different quantitative models can be set up to determine the optimal index-replicating portfolio. In this paper, we propose an alternative based on imposing a constraint on the q-norm (0 < q < 1) of the replicating portfolio's asset weights: the q-norm constraint regularises the problem and identifies a sparse model. Both approaches are challenging from an optimization viewpoint, due to either the presence of the cardinality constraint or a non-convex constraint on the q-norm. The problem can become even more complex when non-convex distance measures or other real-world constraints are considered. We employ a hybrid heuristic as a flexible tool to tackle both optimization problems. The empirical analysis of real-world financial data allows us to compare the two index tracking approaches. Moreover, we propose a strategy to determine the optimal number of constituents and the corresponding optimal portfolio asset weights.
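The q-norm regularised objective can be explored with a simple heuristic. A minimal sketch, assuming numpy; the synthetic returns, penalty weight lam, the crude simplex projection, and the random-perturbation search are all illustrative stand-ins for the paper's hybrid heuristic:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic return data: benchmark = equally weighted index of its constituents
n_obs, n_assets = 250, 30
R = rng.normal(0.0004, 0.01, size=(n_obs, n_assets))
index = R.mean(axis=1)

q, lam = 0.5, 0.002              # q-norm exponent and regularisation strength

def objective(w):
    te = np.mean((R @ w - index) ** 2)        # squared tracking error
    return te + lam * np.sum(np.abs(w) ** q)  # q-norm penalty promotes sparsity

def project_simplex(w):
    """Map onto {w >= 0, sum w = 1} by clipping and renormalising
    (a crude stand-in for an exact simplex projection)."""
    w = np.maximum(w, 0.0)
    return w / w.sum() if w.sum() > 0 else np.full_like(w, 1.0 / w.size)

# Random-perturbation local search over the weight simplex
w = np.full(n_assets, 1.0 / n_assets)
best = objective(w)
for _ in range(20_000):
    cand = project_simplex(w + rng.normal(0.0, 0.01, n_assets))
    val = objective(cand)
    if val < best:
        w, best = cand, val

print(f"active constituents: {(w > 1e-4).sum()} of {n_assets}, objective {best:.3e}")
```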

13.
In the literature, one of the main objectives of stochastic claims reserving is to find models underlying the chain-ladder method in order to analyze the variability of the outstanding claims, either analytically or by bootstrapping. In bootstrapping, these models are used to find a full predictive distribution of the claims reserve, even though there is a long tradition of actuaries calculating the reserve estimate according to more complex algorithms than the chain ladder, without explicit reference to an underlying model. In this paper we investigate existing bootstrap techniques and suggest two alternative bootstrap procedures, one non-parametric and one parametric, by which the predictive distribution of the claims reserve can be found for age-to-age development factor methods other than the chain ladder, under rather mild model assumptions. For illustration, the procedures are applied to three different development triangles.

14.
This note provides a method to convert the dynamic models in Cysne [Cysne, Rubens P., 2006. A note on the non-convexity problem in some shopping-time and human-capital models. Journal of Banking and Finance 30 (10), 2737–2745] and in Cysne [Cysne, Rubens P., 2008. A note on "inflation and welfare". Journal of Banking and Finance 32 (9), 1984–1987] into concave optimization problems. We do this by introducing new control and state variables in the models. Cysne (2006, 2008) restricts attention to continuous-time models and derives parametric conditions to use Arrow's sufficiency theorem. When the sufficient conditions presented in Cysne (2006) are satisfied (but not under the sharper sufficient conditions presented in Cysne (2008)), we can rewrite these models as concave optimization problems even if time is discrete.

15.
In commercial banking, various statistical models for corporate credit rating have been theoretically promoted and applied to bank-specific credit portfolios. In this paper, we empirically compare and test the performance of a wide range of parametric and nonparametric credit rating model approaches in a statistically coherent way, based on a 'real-world' data set. We repeatedly (k times) split a large sample of industrial firms' default data into disjoint training and validation subsamples. For all model types, we estimate k out-of-sample discriminatory power measures, allowing us to compare the models coherently. We observe that more complex and nonparametric approaches, such as random forests, neural networks, and generalized additive models, perform best in-sample. However, comparing the k out-of-sample cross-validation results, these models overfit and lose some of their predictive power. Rather than improving discriminatory power, we perceive their major contribution to be their usefulness as diagnostic tools for the selection of rating factors and the development of simpler, parametric models.
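The repeated split-and-validate comparison can be sketched with standard tooling. A minimal illustration, assuming numpy and scikit-learn; the synthetic default data, the two model choices, and the 5-fold setup are illustrative, not the paper's k-fold design or proprietary sample:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic default data as a stand-in for the proprietary sample (~5% defaults)
X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                           weights=[0.95], random_state=0)

models = {
    "logit (parametric)": LogisticRegression(max_iter=1000),
    "random forest (nonparametric)": RandomForestClassifier(n_estimators=200,
                                                            random_state=0),
}
for name, model in models.items():
    # k out-of-sample AUCs on disjoint validation folds
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name:32s} AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```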

16.
We present an approach for modelling dependencies in exponential Lévy market models with arbitrary margins, originating from time-changed Brownian motions. Using the weak subordination of Buchmann et al. [Bernoulli, 2017], we gain a new layer of dependencies beyond traditional approaches based on pathwise subordination, since weakly subordinated processes are not required to have independent components under multivariate stochastic time changes. We apply a subordinator that is able to incorporate any joint or idiosyncratic information arrivals. We emphasize multivariate variance gamma and normal inverse Gaussian processes and state explicit formulae for the Lévy characteristics. Using maximum likelihood, we estimate multivariate variance gamma models on various market data and show that these models are highly preferable to traditional approaches. Consistent values of basket options under given marginal pricing models are achieved using the Esscher transform, generating a non-flat implied correlation surface.
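The univariate building block, a Brownian motion time-changed by a gamma subordinator, can be simulated directly. A minimal sketch, assuming numpy; the parameters are illustrative, and the weak subordination the paper uses for the multivariate case is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(11)

def variance_gamma_paths(T=1.0, n_steps=252, n_paths=10_000,
                         sigma=0.2, theta=-0.1, nu=0.3):
    """Variance gamma increments via pathwise subordination: a Brownian motion
    with drift theta and volatility sigma evaluated at a gamma time change with
    unit mean rate and variance rate nu."""
    dt = T / n_steps
    # Gamma subordinator increments: mean dt, variance nu * dt
    dG = rng.gamma(shape=dt / nu, scale=nu, size=(n_paths, n_steps))
    dX = theta * dG + sigma * np.sqrt(dG) * rng.normal(size=(n_paths, n_steps))
    return dX.cumsum(axis=1)

X = variance_gamma_paths()
x_T = X[:, -1]
# Theoretical mean theta*T = -0.1; variance (sigma^2 + theta^2 * nu) * T = 0.043
print(f"sample mean {x_T.mean():+.4f}, sample variance {x_T.var():.4f}")
```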

17.
The dichotomy between timing ability and the ability to select individual assets has been widely used in discussing investment performance measurement. This paper discusses the conceptual and econometric problems associated with defining and measuring timing and selectivity. In defining these notions we attempt to capture their intuitive interpretation. We offer two basic modeling approaches, which we term the portfolio approach and the factor approach. We show how the quality of timing and selectivity information can be identified statistically in a number of simple models, and discuss some of the econometric issues associated with these models. In particular, a simple quadratic regression is shown to be valid in measuring timing information.
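The quadratic-regression idea can be shown in a few lines. A minimal sketch, assuming numpy; the synthetic fund with a convex exposure to the market is invented for illustration, in the spirit of a Treynor-Mazuy style regression rather than the paper's exact specification:

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic data: a fund with genuine timing skill scales its market exposure
# up when the market return is high, which induces a convex payoff.
rm = rng.normal(0.005, 0.04, 600)                        # market excess returns
rp = 0.001 + (0.8 + 4.0 * rm) * rm + rng.normal(0, 0.01, 600)

# Quadratic regression: rp = a + b*rm + c*rm^2 + e
X = np.column_stack([np.ones_like(rm), rm, rm ** 2])
a, b, c = np.linalg.lstsq(X, rp, rcond=None)[0]
# c > 0 signals timing ability; a captures selectivity
print(f"selectivity (alpha) = {a:.4f}, beta = {b:.2f}, timing (gamma) = {c:.2f}")
```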

18.
In this paper, we are interested in predicting multiple-period Value at Risk and Expected Shortfall based on the so-called iterating approach. In general, the properties of the conditional distribution of multiple-period returns do not follow easily from the one-period data-generating process, rendering this a non-trivial task. We outline a framework that forms the basis for setting up approximations and study four different approaches. Their performance is evaluated by means of extensive Monte Carlo simulations based on an asymmetric GARCH model, implying conditional skewness and excess kurtosis in the multiple-period returns. The simulation-based approach performed best, closely followed by the one assuming a skewed t-distribution for the multiple-period returns. The approach based on a Gram–Charlier expansion was not able to cope with the implied non-normality, while the so-called root-k approach performed poorly. In addition, we outline how the delta method may be used to quantify the estimation error in the predictors, and in the Monte Carlo study we found that it performed well. In an empirical illustration, we compute 10-day Value at Risk and Expected Shortfall for Brent crude oil, the EUR/USD exchange rate, and the S&P 500 index. The root-k approach clearly performed the worst, and the other approaches performed quite similarly, with the simulation-based approach and the one based on the skewed t-distribution somewhat better than the one based on the Gram–Charlier expansion.
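The iterating, simulation-based approach amounts to propagating the one-day model over the horizon. A minimal sketch, assuming numpy; a symmetric GARCH(1,1) with invented parameters stands in for the asymmetric model of the paper:

```python
import numpy as np

rng = np.random.default_rng(13)

# GARCH(1,1) with assumed (not estimated) parameters, horizon h = 10 days
omega, alpha, beta = 1e-6, 0.08, 0.90
h, n_sims = 10, 100_000
sigma2_0, level = 1.2e-4, 0.99

# Iterate the one-day model forward h days on each simulation path
sigma2 = np.full(n_sims, sigma2_0)
ret = np.zeros(n_sims)
for _ in range(h):
    r = np.sqrt(sigma2) * rng.standard_normal(n_sims)   # one-day return
    ret += r                                            # multi-period log-return
    sigma2 = omega + alpha * r ** 2 + beta * sigma2     # variance recursion

var = -np.quantile(ret, 1 - level)       # 10-day 99% Value at Risk
es = -ret[ret <= -var].mean()            # matching Expected Shortfall
print(f"10-day VaR(99%) = {var:.4f}, ES(99%) = {es:.4f}")
```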

19.
The literature has shown that the volatility of stock and foreign exchange market returns exhibits long memory. It has also been shown that this feature may be spurious, and that volatility may actually consist of a short-memory process contaminated by random level shifts (RLS). In this paper, we follow recent econometric approaches and estimate an RLS model for the logarithm of the absolute value of stock and forex returns. The model consists of the sum of a short-memory component and a level-shift component. The second component is specified as the cumulative sum of a process that is zero with probability 1 − α and a random variable with probability α. The results show that level shifts are rare, but once they are taken into account, the long-memory property disappears. Moreover, the presence of generalized autoregressive conditional heteroscedasticity (GARCH) effects is eliminated once level shifts are accounted for. An out-of-sample forecasting exercise shows that the RLS model performs better than traditional long-memory models such as ARFIMA(p, d, q).
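The spurious long-memory effect is easy to reproduce by simulation. A minimal sketch, assuming numpy; the shift probability, shift size, and AR(1) short-memory component are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(21)

n, alpha = 4000, 0.005
# Level-shift component: cumulative sum of a process that is 0 with
# probability 1 - alpha and N(0, 0.5^2) with probability alpha
shifts = np.where(rng.random(n) < alpha, rng.normal(0, 0.5, n), 0.0).cumsum()

# Short-memory component: AR(1) noise
e = rng.normal(0, 0.3, n)
short = np.empty(n)
short[0] = e[0]
for t in range(1, n):
    short[t] = 0.3 * short[t - 1] + e[t]

y = shifts + short    # RLS process: rare level shifts plus short memory

# Sample autocorrelations decay very slowly, mimicking long memory
acf = [np.corrcoef(y[:-k], y[k:])[0, 1] for k in (1, 50, 200)]
print("ACF at lags 1, 50, 200:", np.round(acf, 3))
```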

20.
The prediction of the outstanding loss liabilities of a non-life run-off portfolio, as well as the quantification of the prediction error, is one of the most important actuarial tasks in non-life insurance. In this paper we consider this prediction problem in a multivariate context. More precisely, we derive the predictive distribution of the claims reserves simultaneously for several correlated run-off portfolios in the framework of the chain-ladder claims reserving method.

