Similar literature
20 similar articles found.
1.
A three-parameter generalization of the Pareto distribution is presented with a density function having a flexible upper tail for modeling loss payment data. This generalized Pareto distribution is referred to as the Odd Pareto (OP) distribution, since it is derived by considering the distributions of the odds of the Pareto and inverse Pareto distributions. Basic properties of the OP distribution are studied. Model parameters are estimated using both modified and regular maximum likelihood methods. Simulation studies are conducted to compare the OP with the exponentiated Pareto, Burr, and Kumaraswamy distributions using two different test statistics based on the maximum likelihood method. Furthermore, two examples from the Norwegian fire insurance claims dataset are provided to illustrate the upper tail flexibility of the distribution. Extensions of the Odd Pareto distribution are also considered to improve the fit to data.

2.
Not all claims are reported when a database for financial operational risk is created. The probability of reporting increases with the size of the operational risk loss and converges towards one for big losses. Losses in operational risk have different causes and follow a wide variety of distributional shapes, so a method for modelling operational risk based on one or two parametric models is likely to fail. In this paper, we introduce a semi-parametric method for modelling operational risk that is capable of taking under-reporting into account while being guided by prior knowledge of the distributional shape.
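A minimal numerical sketch of the under-reporting idea described above (not the paper's semi-parametric method): we assume a hypothetical logistic reporting probability that increases with loss size and converges to one, then correct the naive mean of reported losses by inverse-probability weighting.

```python
import numpy as np

rng = np.random.default_rng(42)
true_losses = rng.lognormal(mean=0.0, sigma=1.0, size=20000)

# hypothetical reporting probability: increasing in the loss size,
# converging to one for big losses (logistic in log-loss; an assumption)
def p_report(x):
    return 1.0 / (1.0 + np.exp(-2.0 * (np.log(x) - 0.5)))

# small losses are under-reported, so the reported sample tilts upward
reported = true_losses[rng.random(true_losses.size) < p_report(true_losses)]

naive_mean = reported.mean()                                  # biased upward
ipw_mean = np.average(reported, weights=1.0 / p_report(reported))
true_mean = true_losses.mean()
```

The inverse-probability-weighted (Hájek) estimator down-weights the over-represented large losses and recovers the population mean far more closely than the naive sample mean.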

3.
Motivated by the modelling of liquidity risk in fund management in a dynamic setting, we propose and investigate a class of time series models with generalized Pareto marginals: the autoregressive generalized Pareto process (ARGP), a modified ARGP and a thresholded ARGP. These models are able to capture key data features apparent in fund liquidity data and reflect the underlying phenomena via easily interpreted, low-dimensional model parameters. We establish stationarity and ergodicity, provide a link to the class of shot-noise processes, and determine the associated interarrival distributions for exceedances. Moreover, we provide estimators for all relevant model parameters and establish consistency and asymptotic normality for all estimators (except the threshold parameter, which is to be estimated in advance). Finally, we illustrate our approach using real-world fund redemption data, and we discuss the goodness-of-fit of the estimated models.

4.
Insurance claims data usually contain a large number of zeros and exhibit fat-tail behavior. Misestimation of one end of the tail of the claims distribution impacts the other end and can affect both the adequacy of premiums and the reserves that need to be held. In addition, insured policyholders in a portfolio are naturally non-homogeneous. It is an ongoing challenge for actuaries to build a predictive model that simultaneously captures these peculiar characteristics of claims data and policyholder heterogeneity. Such models can improve predictions and thereby ease the decision-making process. This article proposes the use of spliced regression models for fitting insurance loss data. A primary advantage of spliced distributions is their flexibility to model different segments of the claims distribution with different parametric models. The threshold that separates the segments is treated as a parameter, which presents an additional challenge in the estimation. Our simulation study demonstrates the effectiveness of multistage optimization for likelihood inference and, at the same time, the repercussions of model misspecification. For purposes of illustration, we consider three-component spliced regression models: the first component contains the zeros, the second models the middle segment of the loss data, and the third models the tail segment. We calibrate these models and evaluate their performance using a Singapore auto insurance claims dataset. The estimation results show that the spliced regression model outperforms the Tweedie regression model in terms of tail fitting and prediction accuracy.
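The three-component structure can be sketched as a spliced density: a point mass at zero, a truncated lognormal body on (0, τ], and a Pareto tail beyond τ. The parameter values and the choice of lognormal body below are illustrative assumptions, not estimates from the paper.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# illustrative parameter values, not estimates from the paper
p0, w, tau = 0.3, 0.8, 5.0        # P(Y = 0), body weight, splicing threshold
body = stats.lognorm(s=1.0, scale=np.exp(0.5))   # body distribution on (0, tau]
tail = stats.pareto(b=2.0, scale=tau)            # Pareto tail on (tau, inf)

def spliced_pdf(y):
    """Density of the continuous part; its total mass is 1 - p0."""
    if y <= 0:
        return 0.0
    if y <= tau:
        # lognormal truncated to (0, tau], carrying weight (1 - p0) * w
        return (1 - p0) * w * body.pdf(y) / body.cdf(tau)
    # Pareto tail carrying the remaining weight (1 - p0) * (1 - w)
    return (1 - p0) * (1 - w) * tail.pdf(y)

# sanity check: point mass plus the two continuous pieces sums to one
body_mass, _ = quad(spliced_pdf, 0.0, tau)
tail_mass, _ = quad(spliced_pdf, tau, np.inf)
```

In a regression setting, each component's parameters would additionally depend on covariates, and τ itself would be estimated, which is the difficulty the abstract's multistage optimization addresses.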

5.
A rich variety of probability distributions has been proposed in the actuarial literature for fitting insurance loss data. Examples include the lognormal, log-t, various versions of the Pareto, loglogistic, Weibull, gamma and its variants, and the generalized beta of the second kind, among others. In this paper, we supplement the literature by adding the log-folded-normal and log-folded-t families. Shapes of the density function and key distributional properties of the ‘folded’ distributions are presented, along with three methods for the estimation of parameters: maximum likelihood, method of moments, and method of trimmed moments. Further, large- and small-sample properties of these estimators are studied in detail. Finally, we fit the newly proposed distributions to data representing the total damage done by 827 fires in Norway in 1988. The fitted models are then employed in a few quantitative risk management examples, where point and interval estimates for several value-at-risk measures are calculated.

6.
In this paper, we propose an alternative approach to estimating long-term risk. Instead of using the static square-root-of-time method, we use a dynamic approach based on volatility forecasting with non-linear models. We explore the possibility of improving the estimates using different models and distributions. By comparing estimates of two risk measures, value at risk and expected shortfall, obtained from different models and innovations at short-, medium-, and long-term horizons, we find that the best model varies with the forecasting horizon and that the generalized Pareto distribution gives the most conservative estimates with all models at all horizons. The empirical results show that the square-root method underestimates risk at long horizons and that our approach is more competitive for long-term risk estimation.

7.
Financial returns typically display heavy tails and some degree of skewness, and conditional variance models with these features often outperform more limited models. The difference in performance may be especially important when estimating quantities that depend on tail features, including risk measures such as the expected shortfall. Here, using recent generalizations of the asymmetric Student-t and exponential power distributions that allow separate parameters to control skewness and the thickness of each tail, we fit daily financial return volatility and forecast expected shortfall for the S&P 500 index and a number of individual company stocks; the generalized distributions are used for the standardized innovations in a nonlinear, asymmetric GARCH-type model. The results provide evidence for the usefulness of the generalized distributions in improving fit and prediction of downside market risk of financial assets. Constrained versions, corresponding to distributions used in the previous literature, are also estimated to provide a comparison of the performance of different conditional distributions.
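The conditional variance machinery underlying such models can be sketched with a plain GARCH(1,1) recursion (the paper uses a nonlinear, asymmetric variant with generalized skewed innovations; this simplification with Student-t innovations only shows the filtering step). We simulate from the model and confirm the filter recovers the conditional variance path.

```python
import numpy as np

def garch11_filter(returns, omega, alpha, beta):
    """Conditional variance recursion: s2[t] = omega + alpha*r[t-1]**2 + beta*s2[t-1]."""
    sigma2 = np.empty(returns.size)
    sigma2[0] = returns.var()          # common initialization choice
    for t in range(1, returns.size):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(3)
omega, alpha, beta = 0.1, 0.1, 0.8     # illustrative parameters
n = 5000
r = np.empty(n)
s2_path = np.empty(n)
s2 = omega / (1 - alpha - beta)        # start at the unconditional variance
for t in range(n):
    s2_path[t] = s2
    # unit-variance Student-t(6) innovation (variance of t(6) is 6/4)
    r[t] = np.sqrt(s2) * rng.standard_t(6) * np.sqrt(4 / 6)
    s2 = omega + alpha * r[t] ** 2 + beta * s2

sig2_hat = garch11_filter(r, omega, alpha, beta)
```

Because the initialization error decays geometrically at rate `beta`, the filtered variances coincide with the simulated path after a short burn-in; in practice the parameters themselves would be estimated by maximizing the chosen innovation distribution's likelihood.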

8.
Abstract

This article investigates performance of interval estimators of various actuarial risk measures. We consider the following risk measures: proportional hazards transform (PHT), Wang transform (WT), value-at-risk (VaR), and conditional tail expectation (CTE). Confidence intervals for these measures are constructed by applying nonparametric approaches (empirical and bootstrap), the strict parametric approach (based on the maximum likelihood estimators), and robust parametric procedures (based on trimmed means).

Using Monte Carlo simulations, we compare the average lengths and proportions of coverage (of the true measure) of the intervals under two data-generating scenarios: “clean” data and “contaminated” data. In the “clean” case, data sets are generated by the following (similar shape) parametric families: exponential, Pareto, and lognormal. Parameters of these distributions are selected so that all three families are equally risky with respect to a fixed risk measure. In the “contaminated” case, the “clean” data sets from these distributions are mixed with a small fraction of unusual observations (outliers). It is found that approximate knowledge of the underlying distribution combined with a sufficiently robust estimator (designed for that distribution) yields intervals with satisfactory performance under both scenarios.
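Of the approaches compared above, the nonparametric bootstrap is the easiest to sketch: resample the losses with replacement and read confidence limits off the percentiles of the resampled risk measures (illustrated here for VaR and CTE on a synthetic Pareto-type sample; the sample and levels are assumptions for the sketch).

```python
import numpy as np

rng = np.random.default_rng(0)
losses = rng.pareto(3.0, size=1000) + 1.0     # heavy-tailed illustrative sample

def var_cte(x, level=0.95):
    var = np.quantile(x, level)               # value-at-risk
    cte = x[x >= var].mean()                  # conditional tail expectation
    return var, cte

var_hat, cte_hat = var_cte(losses)

# nonparametric percentile bootstrap: resample, re-estimate, take percentiles
boot = np.array([var_cte(rng.choice(losses, size=losses.size, replace=True))
                 for _ in range(2000)])
var_ci = np.percentile(boot[:, 0], [2.5, 97.5])
cte_ci = np.percentile(boot[:, 1], [2.5, 97.5])
```

The abstract's "contaminated" scenario corresponds to injecting a few gross outliers into `losses`; the bootstrap intervals then widen sharply for CTE, which motivates the robust (trimmed-mean-based) alternatives it studies.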

9.
Starting from well-known empirical stylized facts of financial time series, we develop dynamic portfolio protection trading strategies based on econometric methods. As a criterion for riskiness, we consider the evolution of the value-at-risk spread from a GARCH model with normal innovations relative to a GARCH model with generalized innovations. These generalized innovations may, for example, follow a Student t, a generalized hyperbolic, an alpha-stable or a generalized Pareto distribution (GPD). Our results indicate that the GPD provides the strongest signals for avoiding tail risks. This is not surprising, as the GPD arises as a limit of tail behaviour in extreme value theory and is therefore especially suited to dealing with tail risks. Out-of-sample backtests on 11 years of DAX futures data indicate that the dynamic tail-risk protection strategy effectively reduces the tail risk while outperforming traditional portfolio protection strategies. The results are further validated by calculating the statistical significance of the results obtained using bootstrap methods. A number of robustness tests, including application to other assets, further underline the effectiveness of the strategy. Finally, by empirically testing for second-order stochastic dominance, we find that risk-averse investors would be willing to pay a positive premium to move from a static buy-and-hold investment in the DAX future to the tail-risk protection strategy.

10.
Abstract

Estimation of the tail index parameter of a single-parameter Pareto model has wide application in actuarial and other sciences. Here we examine various estimators from the standpoint of two competing criteria: efficiency and robustness against upper outliers. With the maximum likelihood estimator (MLE) being efficient but nonrobust, we desire alternative estimators that retain a relatively high degree of efficiency while also being adequately robust. A new generalized median type estimator is introduced and compared with the MLE and several well-established estimators associated with the methods of moments, trimming, least squares, quantiles, and percentile matching. The method of moments and least squares estimators are found to be relatively deficient with respect to both criteria and should become disfavored, while the trimmed mean and generalized median estimators tend to dominate the other competitors. The generalized median type performs best overall. These findings provide a basis for revising and updating prevailing viewpoints. Other topics discussed are applications to robust estimation of upper quantiles, tail probabilities, and actuarial quantities such as stop-loss and excess-of-loss reinsurance premiums that arise in connection with portfolio solvency. Robust parametric methods are compared with empirical nonparametric methods, which are typically nonrobust.
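The efficiency-versus-robustness trade-off is easy to demonstrate numerically. For the single-parameter Pareto with known scale x_m, the MLE of the tail index is n divided by the sum of log-exceedances; a simple trimmed variant (a stand-in for the paper's trimmed-mean estimator, with an arbitrarily chosen 5% trim) barely moves when one gross upper outlier is injected, while the MLE shifts noticeably.

```python
import numpy as np

rng = np.random.default_rng(11)
alpha_true, xm = 2.0, 1.0
# single-parameter Pareto sample via inverse-transform sampling
x = xm * rng.uniform(size=500) ** (-1.0 / alpha_true)

def mle(x, xm=1.0):
    """MLE of the Pareto tail index: n / sum(log(x / xm))."""
    return x.size / np.log(x / xm).sum()

def trimmed(x, xm=1.0, cut=0.05):
    """Robust (upper-trimmed) variant; biased upward but outlier-resistant."""
    logs = np.sort(np.log(x / xm))
    kept = logs[: int((1 - cut) * logs.size)]   # drop the largest 5%
    return 1.0 / kept.mean()

x_out = np.append(x, 1e6)                        # one gross upper outlier
shift_mle = abs(mle(x_out) - mle(x))
shift_trim = abs(trimmed(x_out) - trimmed(x))
```

The trimmed estimator pays for its stability with bias (trimming shortens the mean log-exceedance), which is exactly the efficiency cost the abstract weighs against robustness.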

11.
Abstract

This paper shows how Bayesian models within the framework of generalized linear models can be applied to claims reserving. The author demonstrates that this approach is closely related to the Bornhuetter-Ferguson technique. Benktander (1976) and Mack (2000) previously studied the Bornhuetter-Ferguson technique and advocated using credibility models. The present paper uses a Bayesian parametric model within the framework of generalized linear models.

12.
Risk Measurement Performance of Alternative Distribution Functions
This paper evaluates the performance of three extreme value distributions, i.e., generalized Pareto distribution (GPD), generalized extreme value distribution (GEV), and Box‐Cox‐GEV, and four skewed fat‐tailed distributions, i.e., skewed generalized error distribution (SGED), skewed generalized t (SGT), exponential generalized beta of the second kind (EGB2), and inverse hyperbolic sign (IHS) in estimating conditional and unconditional value at risk (VaR) thresholds. The results provide strong evidence that the SGT, EGB2, and IHS distributions perform as well as the more specialized extreme value distributions in modeling the tail behavior of portfolio returns. All three distributions produce similar VaR thresholds and perform better than the SGED and the normal distribution in approximating the extreme tails of the return distribution. The conditional coverage and the out‐of‐sample performance tests show that the actual VaR thresholds are time varying to a degree not captured by unconditional VaR measures. In light of the fact that VaR type measures are employed in many different types of financial and insurance applications including the determination of capital requirements, capital reserves, the setting of insurance deductibles, the setting of reinsurance cedance levels, as well as the estimation of expected claims and expected losses, these results are important to financial managers, actuaries, and insurance practitioners.

13.
In this paper, we propose a class of infinite-dimensional phase-type distributions with finitely many parameters as models for heavy-tailed distributions. The class of finite-dimensional phase-type distributions is dense in the class of distributions on the positive reals and may hence approximate any such distribution. We prove that formulas from renewal theory, with particular attention to ruin probabilities, which hold for common phase-type distributions also hold in the infinite-dimensional case. We provide algorithms for calculating functionals of interest such as the renewal density and the ruin probability. It may be of interest to approximate a given heavy-tailed distribution of some other type by a distribution from this class, and to this end we provide a calibration procedure that works for the approximation of distributions with a slowly varying tail. An example from risk theory is presented, comparing ruin probabilities for a classical risk process with Pareto-distributed claim sizes: exact known ruin probabilities for the Pareto case are compared to those obtained by approximating with an infinite-dimensional hyper-exponential distribution.

14.
We calculate the probability of failure of the Norwegian banking sector both before and after the Norwegian banking crisis. Thus, unlike previous studies of this kind, we choose a sample period and banking sector in which there were significant numbers of bank failures. This approach therefore gives a better indication of the quality of the calculated risk measure. Our results indicate evidence of a steep increase in the risk inherent in this sector beginning in 1984, following the deregulation of Norwegian banks in the mid-1980s. We also find that risk levels in the sector fall after 1992 and continue to fall to pre-1982 levels by the end of our sample in December 1995.

15.
Parametric term structure models have been successfully applied to numerous problems in fixed income markets, including pricing, hedging, managing risk, as well as to the study of monetary policy implications. In turn, dynamic term structure models, equipped with stronger economic structure, have been mainly adopted to price derivatives and explain empirical stylized facts. In this paper, we combine flavors of those two classes of models to test whether no-arbitrage affects forecasting. We construct cross-sectional (allowing arbitrages) and arbitrage-free versions of a parametric polynomial model to analyze how well they predict out-of-sample interest rates. Based on US Treasury yield data, we find that no-arbitrage restrictions significantly improve forecasts. Arbitrage-free versions achieve overall smaller biases and root mean square errors for most maturities and forecasting horizons. Furthermore, a decomposition of forecasts into forward-rates and holding return premia indicates that the superior performance of no-arbitrage versions is due to a better identification of bond risk premium.

16.
This article proposes a theoretical framework that is built upon extreme value theory to study three instruments (i.e. margin, capital requirement and price limits) for managing default risk in futures markets. Specifically, the exceedances over a price threshold are modeled using a generalized Pareto distribution, and the models are static (one-period). We incorporate the risk attitudes of clearing firms into the framework to investigate the efficacy of these instruments under several risk measures, including value-at-risk measures, expected-shortfall measures and spectral risk measures. An empirical study on the VIX futures (or VX) data shows that the effectiveness of these market instruments rests not only on clearing firms' risk attitudes, but also on the tail fatness of the futures price distribution. Moreover, the shift in the risk attitudes of clearing firms may cause interactions among these instruments, which casts new light on the economic rationale of price limits.
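The peaks-over-threshold step underlying such GPD-based frameworks can be sketched as follows: pick a high threshold, fit a generalized Pareto distribution to the exceedances, and extrapolate a tail quantile (VaR). The fat-tailed loss sample below is a synthetic stand-in, not the paper's VIX futures data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
losses = np.abs(stats.t.rvs(df=4, size=10000, random_state=rng))  # fat-tailed

u = np.quantile(losses, 0.95)                 # exceedance threshold
exc = losses[losses > u] - u                  # excesses over the threshold
xi, _, beta = stats.genpareto.fit(exc, floc=0.0)   # GPD shape and scale

# POT quantile estimator: VaR_q = u + (beta/xi) * (((1-q)/p_u)**(-xi) - 1)
p_u = exc.size / losses.size                  # empirical exceedance probability
q = 0.99
var_q = u + (beta / xi) * (((1 - q) / p_u) ** (-xi) - 1.0)
```

A margin or capital requirement would then be read off `var_q` (or an expected-shortfall analogue); the estimated shape `xi` quantifies the "tail fatness" the abstract says drives the instruments' effectiveness.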

17.
We present a fully data-driven strategy to incorporate continuous risk factors and geographical information in an insurance tariff. A framework is developed that aligns flexibility with the practical requirements of an insurance company, the policyholder, and the regulator. Our strategy is illustrated with an example from property and casualty (P&C) insurance, namely a motor insurance case study. We start by fitting generalized additive models (GAMs) to the number of reported claims and their corresponding severity. These models allow for flexible statistical modeling in the presence of different types of risk factors: categorical, continuous, and spatial. The goal is to bin the continuous and spatial risk factors so that the resulting categorical risk factors capture the effect of the covariate on the response accurately, while being easy to use in a generalized linear model (GLM). This is in line with the requirement of an insurance company to construct a practical and interpretable tariff that can be explained easily to stakeholders. We propose to bin the spatial risk factor using Fisher’s natural breaks algorithm and the continuous risk factors using evolutionary trees. GLMs are fitted to the claims data with the resulting categorical risk factors. We find that the resulting GLMs approximate the original GAMs closely and lead to a very similar premium structure.

18.
We use stock market data to analyze the quality of alternative models and procedures for forecasting expected shortfall (ES) at different significance levels. We compute ES forecasts from conditional models applied to the full distribution of returns as well as from models that focus on tail events using extreme value theory (EVT). We also apply the semiparametric filtered historical simulation (FHS) approach to obtain 10-day ES forecasts, and at the 10-day horizon we combine FHS with EVT. The performance of the different models is assessed using six ES backtests recently proposed in the literature. Our results suggest that conditional EVT-based models produce more accurate 1-day and 10-day ES forecasts than non-EVT-based models. Under either approach, asymmetric probability distributions for the return innovations tend to produce better forecasts. Incorporating EVT in parametric or semiparametric approaches also improves ES forecasting performance. These qualitative results remain valid for the recent crisis period, even though all models then underestimate the level of risk. FHS narrows the range of numerical forecasts obtained from alternative models, thereby reducing model risk. Combining EVT and FHS seems to be the best approach for obtaining accurate ES forecasts.

19.
20.
Abstract

Extreme value theory describes the behavior of random variables at extremely high or low levels. The application of extreme value theory to statistics allows us to fit models to data from the upper tail of a distribution. This paper presents a statistical analysis of advanced age mortality data, using extreme value models to quantify the upper tail of the distribution of human life spans.

Our analysis focuses on mortality data from two sources. Statistics Canada publishes the annual number of deaths in Canada, broken down by gender and age. We use the deaths data from 1949 to 1997 in our analysis. The Japanese Ministry of Health, Labor, and Welfare also publishes detailed annual mortality data, including the 10 oldest reported ages at death in each year. We analyze the Japanese data over the period from 1980 to 2000.

Using the r-largest and peaks-over-threshold approaches to extreme value modeling, we fit generalized extreme value and generalized Pareto distributions to the life span data. Changes in distribution by birth cohort or over time are modeled through the use of covariates. We then evaluate the appropriateness of the fitted models and discuss reasons for their shortcomings. Finally, we use our findings to address the existence of a finite upper bound on the life span distribution and the behavior of the force of mortality at advanced ages.  
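A simplified relative of the r-largest approach is a GEV fit to annual maxima, which already exposes the finite-upper-bound question: in scipy's `genextreme` parameterization the shape is c = -ξ, so c > 0 means a bounded upper tail with endpoint loc + scale/c. The simulated "oldest age at death" series below is an illustrative assumption, not the Canadian or Japanese data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
# illustrative stand-in for 60 years of annual maximum ages at death
ann_max = stats.gumbel_r.rvs(loc=110.0, scale=2.0, size=60, random_state=rng)

c, loc, scale = stats.genextreme.fit(ann_max)   # GEV fit to block maxima
# scipy uses shape c = -xi: c near 0 is Gumbel-type (unbounded),
# c > 0 implies a finite upper endpoint for the life span distribution
upper_bound = loc + scale / c if c > 0 else np.inf
```

In the mortality setting, a confidence interval for c that excludes zero on one side is what lets such an analysis argue for or against a finite bound on human life spans.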

