Similar articles
20 similar articles found (search time: 31 ms)
1.
Abstract

Tweedie [11] investigated properties of the inverse Gaussian distribution. In this paper we define the inverse Gaussian process. For the discrete case we derive the density functions of functions of inverse Gaussian variates. We also obtain the covariance function, the stochastic integral, and conditional density functions of an inverse Gaussian process.

2.
Analytical procedures are evaluations of account and transaction flow information made by a study of plausible relationships between both accounting and non‐accounting data. This study investigates the performance of Tweedie distributions (which have Gaussian distributions as members) in improving fit of zero‐inflated, non‐negative, kurtotic and multimodal analytical review data. The study found that account valuations are more informative than marginal data in analytical review, that mixture Poisson–Gamma distributions offer better fit than Gaussian distributions, even under assumptions of central limit theorem convergence, and that mixture Poisson–Gamma distributions provide better predictions of future account and transaction volumes and values. Model performance improvement with price versus returns data in this empirical study was substantial: from less than one‐quarter of variance to almost two‐thirds. Tweedie generalized linear model risk assessments were found to be an order of magnitude smaller than traditional risk assessments, lending support to market inefficiency and increased risk from idiosyncratic factors. An example with several differing distributions shows that use of mixture distributions instead of point estimation can reduce sample size while retaining the power of the audit tests. The results of this study are increasingly important as accounting datasets are growing exponentially larger over time, requiring well‐defined roles for models, algorithms, data and narrative which can only be achieved with statistical protocols and algorithmic languages.
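The zero-inflated Poisson–Gamma mixture this abstract refers to is the compound Poisson–gamma member of the Tweedie family (power parameter between 1 and 2). A minimal simulation sketch, with hypothetical parameter values and not the study's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def tweedie_cpg_sample(lam, alpha, beta, size):
    """Compound Poisson-gamma sample: N ~ Poisson(lam) claim counts, each claim
    Gamma(alpha, scale=beta); the total has an exact point mass at zero when N = 0."""
    n = rng.poisson(lam, size)
    # A sum of n iid Gamma(alpha, beta) variates is Gamma(n * alpha, beta);
    # np.where avoids an invalid zero shape, and the mask restores the zeros.
    return rng.gamma(np.where(n > 0, n * alpha, 1.0), beta) * (n > 0)

x = tweedie_cpg_sample(lam=0.8, alpha=2.0, beta=1.5, size=100_000)
print("P(X = 0) ≈", (x == 0).mean())   # theory: exp(-0.8) ≈ 0.449
print("mean ≈", x.mean())              # theory: lam * alpha * beta = 2.4
```

The point mass at zero plus a continuous positive part is exactly the shape that makes this family a candidate for zero-inflated analytical review data.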

3.
Data insufficiency and reporting threshold are two main issues in operational risk modelling. When these conditions are present, maximum likelihood estimation (MLE) may produce very poor parameter estimates. In this study, we first investigate four methods to estimate the parameters of truncated distributions for small samples—MLE, expectation-maximization algorithm, penalized likelihood estimators, and Bayesian methods. Without any proper prior information, Jeffreys’ prior for truncated distributions is used. Based on a simulation study for the log-normal distribution, we find that the Bayesian method gives much more credible and reliable estimates than the MLE method. Finally, an application to the operational loss severity estimation using real data is conducted using the truncated log-normal and log-gamma distributions. With the Bayesian method, the loss distribution parameters and value-at-risk measure for every cell with loss data can be estimated separately for internal and external data. Moreover, confidence intervals for the Bayesian estimates are obtained via a bootstrap method.
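The truncated-distribution MLE that the abstract uses as a baseline can be sketched for the log-normal case. Threshold, parameters, and sample size below are hypothetical, and this is plain MLE, not the paper's Bayesian procedure:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
H = 1.0                          # hypothetical reporting threshold
mu_true, sigma_true = 0.5, 1.2   # hypothetical log-normal parameters

# Simulate losses, then discard everything below the threshold, as a
# reporting threshold would.
losses = rng.lognormal(mu_true, sigma_true, 5000)
obs = losses[losses > H]

def nll(params):
    """Negative log-likelihood of the left-truncated log-normal:
    density f(x) / (1 - F(H)) for x > H."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    d = stats.lognorm(s=sigma, scale=np.exp(mu))
    return -(d.logpdf(obs).sum() - obs.size * np.log(d.sf(H)))

res = optimize.minimize(nll, x0=[0.0, 1.0], method="Nelder-Mead")
print("MLE (mu, sigma):", res.x)   # should land near (0.5, 1.2)
```

With few observations above the threshold, this likelihood surface flattens, which is the small-sample instability the paper's Bayesian estimators are meant to tame.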

4.
Abstract

Current formulas in credibility theory often estimate expected claims as a function of the sample mean of the experience claims of a policyholder. An actuary may wish to estimate future claims as a function of some statistic other than the sample arithmetic mean of claims, such as the sample geometric mean. This can be suggested to the actuary through the exercise of regressing claims on the geometric mean of prior claims. It can also be suggested through a particular probabilistic model of claims, such as a model that assumes a lognormal conditional distribution. In the first case, the actuary may lean towards using a linear function of the geometric mean, depending on the results of the data analysis. On the other hand, through a probabilistic model, the actuary may want to use the most accurate estimator of future claims, as measured by squared-error loss. However, this estimator might not be linear.

In this paper, I provide a method for balancing the conflicting goals of linearity and accuracy. The credibility estimator proposed minimizes the expectation of a linear combination of a squared-error term and a second-derivative term. The squared-error term measures the accuracy of the estimator, while the second-derivative term constrains the estimator to be close to linear. I consider only those families of distributions with a one-dimensional sufficient statistic and estimators that are functions of that sufficient statistic or of the sample mean. Claim estimators are evaluated by comparing their conditional mean squared errors. In general, functions of the sufficient statistics prove to be better credibility estimators than functions of the sample mean.

5.
ABSTRACT

In the context of predicting future claims, a fully Bayesian analysis – one that specifies a statistical model, prior distribution, and updates using Bayes's formula – is often viewed as the gold standard, while Bühlmann's credibility estimator serves as a simple approximation. But those desirable properties that give the Bayesian solution its elevated status depend critically on the posited model being correctly specified. Here we investigate the asymptotic behavior of Bayesian posterior distributions under a misspecified model, and our conclusion is that misspecification bias generally has damaging effects that can lead to inaccurate inference and prediction. The credibility estimator, on the other hand, is not sensitive at all to model misspecification, giving it an advantage over the Bayesian solution in those practically relevant cases where the model is uncertain. This raises the question: does robustness to model misspecification require that we abandon uncertainty quantification based on a posterior distribution? Our answer to this question is No, and we offer an alternative Gibbs posterior construction. Furthermore, we argue that this Gibbs perspective provides a new characterization of Bühlmann's credibility estimator.
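For context, Bühlmann's credibility estimator mentioned above has a standard nonparametric form. A minimal sketch with made-up claims data (this is the classical estimator, not the paper's Gibbs construction):

```python
import numpy as np

def buhlmann(claims):
    """Bühlmann credibility premiums from a (policyholders x years) claims array.
    Z = n / (n + k), where k = (expected process variance) / (variance of
    hypothetical means)."""
    r, n = claims.shape
    ind_means = claims.mean(axis=1)
    overall = ind_means.mean()
    epv = claims.var(axis=1, ddof=1).mean()          # within-risk variance
    vhm = max(ind_means.var(ddof=1) - epv / n, 0.0)  # between-risk variance
    Z = n / (n + epv / vhm) if vhm > 0 else 0.0
    return Z * ind_means + (1 - Z) * overall, Z

claims = np.array([[3., 5., 4.],
                   [7., 9., 8.],
                   [1., 2., 3.]])
premiums, Z = buhlmann(claims)
print("credibility factor Z =", Z)   # 27/28 for this data
print("premiums:", premiums)
```

Notably, nothing here depends on a parametric model for the claims, which is the source of the robustness the abstract contrasts with the fully Bayesian analysis.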

6.

We consider the classical risk model with unknown claim size distribution F and unknown Poisson arrival rate. Given a sample of claims from F and a sample of interarrival times for these claims, we construct an estimator for the function Z(u), which gives the probability of non-ruin in that model for initial surplus u. We obtain strong consistency and asymptotic normality of that estimator for a large class of claim distributions F. Confidence bounds for Z(u) based on the bootstrap are also given and illustrated by some numerical examples.
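When the claim distribution is known, the non-ruin probability Z(u) in the classical model can be approximated by Monte Carlo. A sketch with exponential claims (where a closed form exists for comparison), hypothetical parameters, and a finite claim horizon; this is an illustration of the target quantity, not the paper's data-driven estimator:

```python
import numpy as np

rng = np.random.default_rng(2)

def nonruin_mc(u, lam=1.0, mu=1.0, c=1.5, n_claims=500, n_paths=4000):
    """Monte Carlo estimate of Z(u) in the Cramér-Lundberg model: premium rate c,
    Poisson rate lam, Exp(mean mu) claims, initial surplus u. Ruin can only occur
    at claim instants, and a finite claim horizon approximates the infinite
    horizon because the surplus drifts upward under positive safety loading."""
    waits = rng.exponential(1 / lam, (n_paths, n_claims))
    sizes = rng.exponential(mu, (n_paths, n_claims))
    # Surplus just after the k-th claim: u + c * T_k - (X_1 + ... + X_k).
    surplus = u + c * np.cumsum(waits, axis=1) - np.cumsum(sizes, axis=1)
    return (surplus.min(axis=1) >= 0).mean()

u, theta = 5.0, 0.5                    # c = (1 + theta) * lam * mu
z_mc = nonruin_mc(u)
# Exponential-claims closed form: psi(u) = exp(-theta*u/((1+theta)*mu))/(1+theta).
z_exact = 1 - np.exp(-theta * u / ((1 + theta) * 1.0)) / (1 + theta)
print("MC    Z(u) ≈", z_mc)
print("exact Z(u) =", z_exact)
```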

7.

This paper considers the collective risk model for the insurance claims process. We adopt a Bayesian point of view, where uncertainty concerning the specification of the prior distribution is a common question. The robust Bayesian approach uses a class of prior distributions which model uncertainty about the prior, instead of a single distribution. Relatively little research has dealt with robustness with respect to ratios of posterior expectations, as occurs with the Esscher and Variance premium principles. Appropriate techniques are developed in this paper to solve this problem using the k-contamination class in the collective risk model.

8.
Systematic longevity risk is increasingly relevant for public pension schemes and insurance companies that provide life benefits. In view of this, mortality models should incorporate dependence between lives. However, the independent lifetime assumption is still heavily relied upon in the risk management of life insurance and annuity portfolios. This paper applies a multivariate Tweedie distribution to incorporate dependence, which it induces through a common shock component. Model parameter estimation is developed based on the method of moments and generalized to allow for truncated observations. The estimation procedure is explicitly developed for various important distributions belonging to the Tweedie family, and finally assessed using simulation.
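The common-shock device for inducing dependence can be illustrated with gamma marginals, which belong to the Tweedie family and are closed under convolution for a shared scale. The shapes and scale below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

# Common-shock construction: X_i = Z0 + Zi, with a shared component Z0.
a0, a1, a2, scale = 1.0, 2.0, 3.0, 1.0   # hypothetical gamma shapes
size = 200_000
z0 = rng.gamma(a0, scale, size)           # common shock
x1 = z0 + rng.gamma(a1, scale, size)      # marginal Gamma(a0 + a1)
x2 = z0 + rng.gamma(a2, scale, size)      # marginal Gamma(a0 + a2)

# Cov(X1, X2) = Var(Z0) = a0 * scale**2, so the correlation is
# a0 / sqrt((a0 + a1) * (a0 + a2)).
rho_hat = np.corrcoef(x1, x2)[0, 1]
rho = a0 / np.sqrt((a0 + a1) * (a0 + a2))
print("simulated corr ≈", rho_hat, " theory:", rho)
```

The shared component is what makes the lifetimes dependent while each marginal stays inside the same distributional family, the property the paper's moment-based estimation exploits.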

9.
Abstract

Dufresne et al. (1991) introduced a general risk model defined as the limit of compound Poisson processes. Such a model is either a compound Poisson process itself or a process with an infinite number of small jumps. Later, in a series of now classical papers, the joint distribution of the time of ruin, the surplus before ruin, and the deficit at ruin was studied (Gerber and Shiu 1997, 1998a, 1998b; Gerber and Landry 1998). These works use the classical and the perturbed risk models and hint that the results can be extended to gamma and inverse Gaussian risk processes.

In this paper we work out this extension to a generalized risk model driven by a nondecreasing Lévy process. Unlike the classical case that models the individual claim size distribution and obtains from it the aggregate claims distribution, here the aggregate claims distribution is known in closed form. It is simply the one-dimensional distribution of a subordinator. Embedded in this wide family of risk models we find the gamma, inverse Gaussian, and generalized inverse Gaussian processes. Expressions for the Gerber-Shiu function are given in some of these special cases, and numerical illustrations are provided.

10.
Nonlinear Mean Reversion in the Short-Term Interest Rate
Using a new Bayesian method for the analysis of diffusion processes, this article finds that the nonlinear drift in interest rates found in a number of previous studies can be confirmed only under prior distributions that are best described as informative. The assumption of stationarity, which is common in the literature, represents a nontrivial prior belief about the shape of the drift function. This belief and the use of "flat" priors contribute strongly to the finding of nonlinear mean reversion. Implementation of an approximate Jeffreys prior results in virtually no evidence for mean reversion in interest rates unless stationarity is assumed. Finally, the article documents that nonlinear drift is primarily a feature of daily rather than monthly data, and that these data contain a transitory element that is not reflected in the volatility of longer-maturity yields.

11.
Abstract

This paper provides a new and accessible approach to establishing certain results concerning the discounted penalty function. The direct approach consists of two steps. In the first step, closed-form expressions are obtained in the special case in which the claim amount distribution is a combination of exponential distributions. A rational function is useful in this context. For the second step, one observes that the family of combinations of exponential distributions is dense. Hence, it suffices to reformulate the results of the first step to obtain general results. The surplus process has downward and upward jumps, modeled by two independent compound Poisson processes. If the distribution of the upward jumps is exponential, a series of new results can be obtained with ease. Subsequently, certain results of Gerber and Shiu [H. U. Gerber and E. S. W. Shiu, North American Actuarial Journal 2(1): 48–78 (1998)] can be reproduced. The two-step approach is also applied when an independent Wiener process is added to the surplus process. Certain results are related to Zhang et al. [Z. Zhang, H. Yang, and S. Li, Journal of Computational and Applied Mathematics 233: 1773–1784 (2010)], which uses different methods.

12.

In this paper, we derive two-sided bounds for the ruin probability in the compound Poisson risk model when the adjustment coefficient of the individual claim size distribution does not exist. These bounds also apply directly to the tails of compound geometric distributions. The upper bound is tighter than that of Dickson (1994). The corresponding lower bound, which holds under the same conditions, is tighter than that of De Vylder and Goovaerts (1984). Even when the adjustment coefficient exists, the upper bound is, in some cases, tighter than Lundberg's bound. These bounds are applicable for any positive distribution function with a finite mean. Examples are given and numerical comparisons with asymptotic formulae for the ruin probability are also considered.
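For comparison, Lundberg's classical bound requires the adjustment coefficient R, the positive root of λ + cR = λM_X(R). A numerical sketch with hypothetical parameters, for exponential claims where R has the closed form θ/((1+θ)μ):

```python
import numpy as np
from scipy.optimize import brentq

lam, mu, theta = 1.0, 1.0, 0.25      # hypothetical model parameters
c = (1 + theta) * lam * mu           # premium rate with safety loading theta

def lundberg_eq(r):
    # R solves lam + c * r = lam * M_X(r); for X ~ Exp(mean mu),
    # M_X(r) = 1 / (1 - mu * r) on r < 1/mu.
    return lam + c * r - lam / (1 - mu * r)

R = brentq(lundberg_eq, 1e-9, 1 / mu - 1e-9)
print("adjustment coefficient R =", R)          # closed form: theta/((1+theta)*mu) = 0.2
print("Lundberg bound at u = 10:", np.exp(-R * 10))
```

When no such R exists (e.g. heavy-tailed claims, whose MGF is infinite for every positive argument), this root-finding fails, which is exactly the regime the paper's bounds address.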

13.
Abstract

Recent advances in statistical decision theory and stochastic processes provide the machinery for showing that the celebrated mean credibility formula is a Bayes rule within a nonparametric context. The credibility factor is obtained as a simple function of the parameter that characterizes the prior distribution. A natural estimator of this parameter leads to a credibility formula having a form similar to the James–Stein estimator.

14.
Abstract

The seminal paper by Gerber and Shiu (1998) unified and extended the study of the event of ruin and related quantities, including the time at which the event of ruin occurs, the deficit at the time of ruin, and the surplus immediately prior to ruin. The first two of these quantities are fundamentally important for risk management techniques that utilize the ideas of Value-at-Risk and Tail Value-at-Risk. As is well known, calculation of these and related quantities requires knowledge of the associated probability distributions. In this paper we derive an explicit expression for the joint (defective) distribution of the time to ruin, the surplus immediately prior to ruin, and the deficit at ruin in the classical compound Poisson risk model. As a by-product, we obtain expressions for the three bivariate distributions generated by the time to ruin, the surplus prior to ruin, and the deficit at ruin. Finally, we consider mixed Erlang claim sizes and show how the joint (defective) distribution of the time to ruin, the surplus prior to ruin, and the deficit at ruin can be calculated.

15.
In this paper, Bayesian methods with both Jeffreys and conjugate priors for estimating parameters of the lognormal–Pareto composite (LPC) distribution are considered. With the Jeffreys prior, the posterior distributions for the parameters of interest are derived and their properties are described. Conjugate priors are proposed and the conditional posterior distributions are provided. In addition, simulation studies are performed to obtain the upper percentage points of the Kolmogorov–Smirnov and Anderson–Darling test statistics. Furthermore, these statistics are used to compare Bayesian and likelihood estimators. In order to clarify and advance the validity of Bayesian and likelihood estimators of the LPC distribution, the well-known Danish fire insurance data set is reanalyzed.

16.
Abstract

This paper considers a family of counting distributions whose densities satisfy certain second order difference equations. Recursions for the evaluation of related compound distributions are developed in the case of severity distributions which are concentrated on the non-negative integers. From these a characterization of the considered counting distributions is obtained, and it is shown that most of these are compound Poisson distributions.
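Recursions of this kind generalize the classical Panjer recursion; for a compound Poisson distribution with severity pmf f on the non-negative integers it reads g_0 = exp(-λ(1 - f_0)) and g_s = (λ/s) Σ_{j=1}^{s} j f_j g_{s-j}. A minimal sketch with illustrative parameters (the plain Poisson case, not the paper's second-order difference-equation family):

```python
import numpy as np

def panjer_compound_poisson(lam, severity, s_max):
    """Panjer recursion for a compound Poisson distribution with severity pmf
    f on {0, 1, 2, ...}: g_0 = exp(-lam * (1 - f_0)) and, for s >= 1,
    g_s = (lam / s) * sum_{j=1}^{s} j * f_j * g_{s-j}."""
    f = np.zeros(s_max + 1)
    f[: len(severity)] = severity
    g = np.zeros(s_max + 1)
    g[0] = np.exp(-lam * (1 - f[0]))
    for s in range(1, s_max + 1):
        j = np.arange(1, s + 1)
        g[s] = (lam / s) * np.sum(j * f[j] * g[s - j])
    return g

# Severity uniform on {1, 2, 3}; aggregate pmf up to total 30.
g = panjer_compound_poisson(lam=2.0, severity=[0.0, 1/3, 1/3, 1/3], s_max=30)
print("P(S = 0) =", g[0])             # exp(-2), since f_0 = 0
print("E[S] ≈", np.arange(31) @ g)    # theory: lam * E[X] = 2 * 2 = 4
```

Each aggregate probability is obtained from earlier ones in O(s) work, avoiding explicit convolution of the severity distribution with itself.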

17.
Many empirical studies have shown that financial asset returns do not always exhibit Gaussian distributions, for example hedge fund returns. The introduction of the family of Johnson distributions allows a better fit to empirical financial data. Additionally, this class can be extended to a quite general family of distributions by considering all possible regular transformations of the standard Gaussian distribution. In this framework, we consider the portfolio optimal positioning problem, which was first addressed by Brennan and Solanki [J. Financial Quant. Anal., 1981, 16, 279–300] and Leland [J. Finance, 1980, 35, 581–594] and further developed by Carr and Madan [Quant. Finance, 2001, 1, 9–37] and Prigent [Generalized option based portfolio insurance. Working Paper, THEMA, University of Cergy-Pontoise, 2006]. As a by-product, we introduce the notion of Johnson stochastic processes. We determine and analyse the optimal portfolio for log returns having Johnson distributions. The solution is characterized for arbitrary utility functions and illustrated in particular for a CRRA utility. Our findings show how the profiles of financial structured products must be selected when taking account of non-Gaussian log-returns.

18.
Recursive formulae are derived for the evaluation of the t-th order cumulative distribution function and the t-th order tail probability of compound mixed Poisson distributions in the case where the derivative of the logarithm of the mixing density can be written as a ratio of polynomials. Also, some general results are derived for the evaluation of the t-th order moments of stop-loss transforms. The recursions can be applied for the exact evaluation of the probability function, distribution function, tail probability and stop-loss premium of compound mixed Poisson distributions and the corresponding mixed Poisson distributions. Several examples are also presented.

19.
Cramér–Von Mises (CVM) inference techniques are developed for some positive flexible infinitely divisible parametric families generalizing the compound Poisson family. These larger families appear to be useful for parametric inference for positive data. The methods are based on inverting the characteristic functions. They are numerically implementable whenever the characteristic function has a closed form. In general, likelihood methods based on density functions are more difficult to implement. CVM methods also lead to model testing, with test statistics asymptotically following a chi-square distribution. The methods are for continuous models, but they can also handle models with a discontinuity point at the origin such as the case of compound Poisson models. Simulation studies seem to suggest that CVM estimators are more efficient than moment estimators for the common range of the compound Poisson gamma family. Actuarial applications include estimation of the stop loss premium, and estimation of the present value of cash flows when interest rates are assumed to be driven by a corresponding Lévy process.

20.

This paper derives two-sided bounds for tails of compound negative binomial distributions, both in the exponential and heavy-tailed cases. Two approaches are employed to derive the two-sided bounds in the case of exponential tails. One is the convolution technique, as in Willmot & Lin (1997). The other is based on an identity of compound negative binomial distributions; they can be represented as a compound Poisson distribution with a compound logarithmic distribution as the underlying claims distribution. This connection between the compound negative binomial, Poisson and logarithmic distributions results in two-sided bounds for the tails of the compound negative binomial distribution, which also generalize and improve a result of Willmot & Lin (1997). For the heavy-tailed case, we use the method developed by Cai & Garrido (1999b). In addition, we give two-sided bounds for stop-loss premiums of compound negative binomial distributions. Furthermore, we derive bounds for the stop-loss premiums of general compound distributions among the classes of HNBUE and HNWUE.
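The identity the abstract relies on, that a Poisson sum of logarithmic counts is negative binomial, can be checked numerically. A sketch with hypothetical parameters; a negative binomial with parameter r and success probability p = 1 − θ arises when λ = −r ln(1 − θ):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
r, theta = 3.0, 0.4                  # hypothetical NB shape and logarithmic parameter
lam = -r * np.log(1 - theta)         # Poisson rate in the compound representation

# Aggregate N ~ Poisson(lam) iid logarithmic(theta) counts per path; the identity
# says the total is negative binomial with r and p = 1 - theta.
n = rng.poisson(lam, 20_000)
total = np.array([stats.logser(theta).rvs(size=k, random_state=rng).sum() for k in n])

print("simulated mean ≈", total.mean())          # NB mean: r*theta/(1-theta) = 2.0
print("simulated P(0) ≈", (total == 0).mean())   # NB P(0): (1-theta)**r = 0.216
```

Since logarithmic variates are at least 1, the total is zero exactly when the Poisson count is zero, so P(0) = exp(−λ) = (1 − θ)^r, matching the negative binomial mass at zero.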


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号