Similar Documents
20 similar documents found (search time: 703 ms)
1.

The Tweedie family, which is classified by the choice of power unit variance function, includes heavy-tailed distributions, and as such could be of significant relevance to actuarial science. The class includes the Normal, Poisson, Gamma, Inverse Gaussian, Stable and Compound Poisson distributions. In this study, we explore the intrinsic objective Bayesian point estimator for the mean value of the Tweedie family based on the intrinsic discrepancy loss function – an inherent loss function arising only from the underlying distribution or model, without any subjective considerations – and the Jeffreys prior distribution, which is designed to express absence of information about the quantity of interest. We compare the proposed point estimator with the Bayes estimator, i.e. the posterior mean based on the quadratic loss function and the Jeffreys prior distribution. We carry out a numerical study to illustrate the methodology in the context of the Inverse Gaussian model, which has not previously been explored in this context and which is useful for insurance contracts.
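The Bayes estimator mentioned in this abstract is simply the posterior mean under quadratic loss. As a minimal sketch (not the paper's Tweedie-wide derivation), the Poisson case – itself a Tweedie member – has Jeffreys prior π(λ) ∝ λ^(−1/2) and a closed-form Gamma posterior; the function name and toy data below are our own illustration:

```python
def poisson_jeffreys_posterior_mean(data):
    """Posterior mean of a Poisson rate under the Jeffreys prior
    pi(lam) ~ lam^(-1/2): the posterior is Gamma(sum(x) + 1/2, n),
    whose mean is (sum(x) + 0.5) / n."""
    n = len(data)
    return (sum(data) + 0.5) / n

# toy data: claim counts for 8 policy-years
claims = [2, 0, 1, 3, 1, 0, 2, 1]
print(poisson_jeffreys_posterior_mean(claims))  # (10 + 0.5) / 8 = 1.3125
```

The estimate 1.3125 sits slightly above the sample mean of 1.25 because the Jeffreys prior contributes half a pseudo-count.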

2.

This paper considers the collective risk model for the insurance claims process. We will adopt a Bayesian point of view, where uncertainty concerning the specification of the prior distribution is a common question. The robust Bayesian approach uses a class of prior distributions which model uncertainty about the prior, instead of a single distribution. Relatively little research has dealt with robustness with respect to ratios of posterior expectations, as occur with the Esscher and Variance premium principles. Appropriate techniques are developed in this paper to solve this problem using the k-contamination class in the collective risk model.

3.
In this paper, a new class of composite models is proposed for modeling actuarial claims data of mixed sizes. The model is developed using the Stoppa distribution and a mode-matching procedure. The use of the Stoppa distribution allows more flexibility over the thickness of the tail, and the mode-matching procedure gives a simple derivation of composite models built from a variety of distributions. In particular, the Weibull–Stoppa and Lognormal–Stoppa distributions are investigated. Their performance is compared with existing composite models in the context of the well-known Danish fire insurance dataset. The results suggest the composite Weibull–Stoppa model outperforms the existing composite models on all seven goodness-of-fit measures considered.
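The paper's mode-matching construction with a Stoppa tail is not reproduced here; as a hedged sketch of the general compositing idea, the following joins a lognormal body to a Pareto tail, choosing the mixing weight so that the density is continuous at the threshold. The function name and all parameter values are illustrative:

```python
import math

def composite_lognorm_pareto_pdf(x, mu, sigma, theta, alpha):
    """Composite density: lognormal body on (0, theta], Pareto(alpha, theta)
    tail on (theta, inf), with mixing weight r chosen so that the density
    is continuous at the threshold theta."""
    ln_pdf = lambda y: (math.exp(-0.5 * ((math.log(y) - mu) / sigma) ** 2)
                        / (y * sigma * math.sqrt(2 * math.pi)))
    ln_cdf_theta = 0.5 * (1 + math.erf((math.log(theta) - mu)
                                       / (sigma * math.sqrt(2))))
    body = ln_pdf(theta) / ln_cdf_theta   # renormalized head density at theta
    tail = alpha / theta                  # Pareto density at theta
    r = tail / (tail + body)              # continuity: r*body == (1-r)*tail
    if x <= theta:
        return r * ln_pdf(x) / ln_cdf_theta
    return (1 - r) * alpha * theta ** alpha / x ** (alpha + 1)

for x in (0.5, 2.0, 10.0):
    print(x, composite_lognorm_pareto_pdf(x, 0.0, 1.0, 2.0, 2.5))
```

By construction the body integrates to r and the tail to 1 − r, so the composite is a proper density without further normalization.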

4.
Bivariate distributions, specified in terms of their conditional distributions, provide a powerful tool for obtaining flexible distributions. These distributions play an important role in specifying the conjugate prior in certain multi-parameter Bayesian settings. In this paper, the conditional specification technique is applied to look for more flexible distributions than the traditional ones used in the actuarial literature, such as the Poisson, negative binomial and others. The new specification is used to draw inferences about parameters of interest in problems appearing in actuarial statistics. Two unconditional (discrete) distributions obtained this way are studied and used in the collective risk model to compute the right-tail probability of the aggregate claim size distribution. Comparisons with the compound Poisson and compound negative binomial models are made.

5.
In this paper, we propose a class of infinite-dimensional phase-type distributions with finitely many parameters as models for heavy-tailed distributions. The class of finite-dimensional phase-type distributions is dense in the class of distributions on the positive reals and may hence approximate any such distribution. We prove that formulas from renewal theory that hold for common phase-type distributions, with particular attention to ruin probabilities, also hold in the infinite-dimensional case. We provide algorithms for calculating functionals of interest, such as the renewal density and the ruin probability. To approximate a given heavy-tailed distribution of some other type by a member of this class, we provide a calibration procedure that works for distributions with a slowly varying tail. An example from risk theory is presented, comparing ruin probabilities for a classical risk process with Pareto-distributed claim sizes: exact known ruin probabilities for the Pareto case are compared with those obtained by approximating with an infinite-dimensional hyper-exponential distribution.
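For contrast with the heavy-tailed Pareto case the abstract discusses, the Cramér–Lundberg model with exponential claims (itself a phase-type distribution) admits a classical closed-form ruin probability. A minimal sketch; the loading and claim mean below are chosen arbitrarily:

```python
import math

def ruin_prob_exp_claims(u, theta, mu):
    """Exact ruin probability in the Cramér–Lundberg model with Exp(mean mu)
    claim sizes and premium loading theta:
        psi(u) = (1 / (1 + theta)) * exp(-theta * u / ((1 + theta) * mu))."""
    return math.exp(-theta * u / ((1 + theta) * mu)) / (1 + theta)

for u in (0.0, 5.0, 20.0):
    print(u, round(ruin_prob_exp_claims(u, theta=0.2, mu=1.0), 4))
```

Note the exponential decay in the initial capital u; for Pareto claims no adjustment coefficient exists and ruin probabilities decay only polynomially, which is what motivates the approximation machinery of the paper.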

6.
Despite its wide use, the Hill estimator and its plot remain difficult to use in Extreme Value Theory (EVT) due to substantial sampling variation in extreme sample quantiles. In this paper, we propose a new plot, which we call the eigenvalue plot, that can be seen as a generalization of the Hill plot. The theory behind the plot is based on a heavy-tailed parametric distribution class called the scaled log phase-type (LogPH) distributions, a generalization of the ordinary LogPH distribution class previously used to model insurance claims data. We show that its tail property and moment condition are well aligned with EVT. Based on our findings, we construct the eigenvalue plot by fitting a shifted PH distribution to the excess log data with a minimal phase size. Through various numerical examples, we illustrate our method and compare it against the Hill plot.
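The Hill estimator that the proposed eigenvalue plot generalizes is easy to sketch directly. Below, a Pareto(α = 2) sample, whose true tail index is γ = 1/α = 0.5, is used for illustration; the function name is our own:

```python
import math, random

def hill_estimates(sample, ks):
    """Hill estimator of the tail index gamma for each k (number of upper
    order statistics used): H_k = (1/k) * sum over the k largest values of
    ln X - ln X_(n-k), where X_(n-k) is the (k+1)-th largest observation."""
    xs = sorted(sample)
    n = len(xs)
    out = {}
    for k in ks:
        ref = math.log(xs[n - k - 1])                 # X_(n-k)
        out[k] = sum(math.log(x) - ref for x in xs[n - k:]) / k
    return out

random.seed(1)
# Pareto(alpha = 2) sample: true tail index gamma = 1/alpha = 0.5
data = [random.paretovariate(2.0) for _ in range(20000)]
print(hill_estimates(data, [100, 500, 2000]))
```

Plotting H_k against k gives the Hill plot; the sampling variation across k that the abstract criticizes is visible even in this clean parametric setting.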

7.
This article offers an alternative proof of the capital asset pricing model (CAPM) when asset returns follow a multivariate elliptical distribution. Empirical studies continue to demonstrate the inappropriateness of the normality assumption for modeling asset returns. The class of elliptically contoured distributions, which includes the more familiar Normal distribution, provides flexibility in modeling the thickness of tails associated with the possibility that asset returns take extreme values with nonnegligible probabilities. As summarized in this article, this class preserves several properties of the Normal distribution. Within this framework, we prove a new version of Stein's lemma for this class of distributions and use this result to derive the CAPM when returns are elliptical. Furthermore, using the probability distortion function approach based on the dual utility theory of choice under uncertainty, we also derive an explicit solution for call option prices when the underlying is log-elliptically distributed. The Black–Scholes call option price is a special case of this general result when the underlying is log-normally distributed.

8.
This paper introduces a new family of multivariate distributions based on Gram–Charlier and Edgeworth expansions. This family encompasses many of the univariate semi-non-parametric densities proposed in financial econometrics as marginals of its different formulations. Within this family, we focus on the analysis of the specifications that guarantee positivity, to obtain well-defined multivariate semi-non-parametric densities. We compare two different multivariate distributions of the family with the multivariate Edgeworth–Sargan, Normal, Student's t and skewed Student's t distributions in an in- and out-of-sample framework for financial returns data. Our results show that the proposed specifications provide a reasonably good performance and would therefore be of interest for applications involving the modelling and forecasting of heavy-tailed distributions.

9.
In this paper, Bayesian methods with both Jeffreys and conjugate priors for estimating the parameters of the lognormal–Pareto composite (LPC) distribution are considered. With the Jeffreys prior, the posterior distributions for the parameters of interest are derived and their properties described. Conjugate priors are proposed and the conditional posterior distributions provided. In addition, simulation studies are performed to obtain the upper percentage points of the Kolmogorov–Smirnov and Anderson–Darling test statistics. These statistics are then used to compare the Bayesian and likelihood estimators. To assess the validity of the Bayesian and likelihood estimators of the LPC distribution, the well-known Danish fire insurance dataset is reanalyzed.

10.
This article presents a new credibility estimation of the probability distributions of risks under Bayes settings in a completely nonparametric framework. In contrast to Ferguson's Bayesian nonparametric method, it does not need to specify a mathematical form of the prior distribution (such as a Dirichlet process). We then show applications of the method to general insurance premium pricing, a procedure commonly known as experience rating, which utilizes the insured's claim experience to calculate a proper premium under a given premium principle (referred to as a risk measure). As this method estimates the probability distributions of losses, not just the means and variances, it provides a unified nonparametric framework for experience rating under arbitrary premium principles. This encompasses the advantages of the well-known Bühlmann and Ferguson approaches, while overcoming their drawbacks. We first establish a linear Bayes method and prove its strong consistency in nonparametric settings that require only knowledge of the first two moments of the loss distributions considered as a stochastic process. Then an empirical Bayes method is developed for the more general situation where a portfolio of risks is observed but no knowledge is available or assumed about their loss and prior distributions, including their moments. It is shown to be asymptotically optimal. The performance of our estimates in comparison with traditional methods is also evaluated through theoretical analysis and numerical studies, which show that our approach produces premium estimates close to the optima.
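For context, the classical Bühlmann linear credibility premium that this nonparametric approach generalizes can be sketched in a few lines; the portfolio values below are illustrative:

```python
def buhlmann_premiums(portfolio):
    """Bühlmann credibility premiums Z*Xbar_i + (1-Z)*Xbar, with
    Z = n / (n + s2/a), where s2 estimates the within-risk variance and
    a the between-risk variance of the hypothetical means."""
    r = len(portfolio)
    n = len(portfolio[0])
    means = [sum(x) / n for x in portfolio]
    grand = sum(means) / r
    s2 = sum((xij - means[i]) ** 2
             for i, x in enumerate(portfolio) for xij in x) / (r * (n - 1))
    a = sum((m - grand) ** 2 for m in means) / (r - 1) - s2 / n
    a = max(a, 0.0)                        # truncate at zero, as is usual
    z = n / (n + s2 / a) if a > 0 else 0.0
    return [z * m + (1 - z) * grand for m in means]

# three risks, four years of claim totals each
port = [[5, 7, 6, 8], [11, 13, 12, 10], [2, 4, 3, 3]]
print([round(p, 3) for p in buhlmann_premiums(port)])
```

Each premium is a convex combination of the risk's own mean and the portfolio mean; the nonparametric method of the abstract targets the whole loss distribution instead of just these first two moments.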

11.
Nonlinear Mean Reversion in the Short-Term Interest Rate
Using a new Bayesian method for the analysis of diffusion processes, this article finds that the nonlinear drift in interest rates found in a number of previous studies can be confirmed only under prior distributions that are best described as informative. The assumption of stationarity, which is common in the literature, represents a nontrivial prior belief about the shape of the drift function. This belief and the use of "flat" priors contribute strongly to the finding of nonlinear mean reversion. Implementation of an approximate Jeffreys prior results in virtually no evidence for mean reversion in interest rates unless stationarity is assumed. Finally, the article documents that nonlinear drift is primarily a feature of daily rather than monthly data, and that these data contain a transitory element that is not reflected in the volatility of longer-maturity yields.

12.
The factor analysis model has been widely applied to study finance problems. The purpose of this paper is to introduce a Bayesian approach for analysing the factor analysis model. The advantages of the proposed Bayesian approach over the classical maximum likelihood approach rest on its capability to incorporate additional prior information, to determine the number of factors in an objective manner, and to produce parameter and factor score estimates with good statistical properties. Based on recently developed tools in statistical computing, such as the Gibbs sampler and path sampling, methods for obtaining the Bayesian estimates of the parameters and factor scores, and a procedure for computing the Bayes factor for selecting the appropriate number of factors in the model, are developed. The proposed methodologies are applied to analyse a data set taken from the Hong Kong stock market. It is found that a three-factor model with a generic market factor can be used to describe the systematic components of asset returns.

13.
This paper proposes a new class of estimators based on the interquantile range of intraday returns, referred to as interquantile range based volatility (IQRBV), to estimate the integrated daily volatility. More importantly, it is shown that a properly chosen IQRBV is jump-robust, because using the range between symmetric quantiles trims the two extreme tails of the intraday returns. We exploit its approximation optimality by examining a general class of distributions from the Pearson type IV family and recommend using IQRBV.04 as the integrated variance estimate. Both our simulation and empirical results highlight advantages of the easy-to-implement, model-free IQRBV over competing estimators in the literature.
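The paper's exact IQRBV construction is not reproduced here; the following toy sketch only illustrates the underlying idea that a rescaled interquantile range of returns is insensitive to a jump, while the sample standard deviation is not. The function name, quantile levels and simulated data are all our own assumptions:

```python
import random, statistics

def iqr_vol(returns, lo=0.25, hi=0.75):
    """Quantile-range scale estimate: the (lo, hi) interquantile range of
    the returns, rescaled by the same range of a standard normal, estimates
    the per-period sigma while ignoring anything in the trimmed tails."""
    xs = sorted(returns)
    n = len(xs)
    nd = statistics.NormalDist()
    q = lambda p: xs[min(int(p * n), n - 1)]   # crude empirical quantile
    return (q(hi) - q(lo)) / (nd.inv_cdf(hi) - nd.inv_cdf(lo))

random.seed(7)
r = [random.gauss(0.0, 0.01) for _ in range(2000)]   # "intraday" returns
r[1000] += 0.5                                       # inject a single jump
print(round(statistics.stdev(r), 4))                 # inflated by the jump
print(round(iqr_vol(r), 4))                          # stays near 0.01
```

The jump lands in the trimmed tail after sorting, so the quantile-range estimate is essentially unchanged, which is the intuition behind the jump-robustness claim.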

14.
Data insufficiency and reporting thresholds are two main issues in operational risk modelling. When these conditions are present, maximum likelihood estimation (MLE) may produce very poor parameter estimates. In this study, we first investigate four methods to estimate the parameters of truncated distributions for small samples: MLE, the expectation-maximization algorithm, penalized likelihood estimators, and Bayesian methods. Without any proper prior information, Jeffreys' prior for truncated distributions is used. Based on a simulation study for the log-normal distribution, we find that the Bayesian method gives much more credible and reliable estimates than the MLE method. Finally, an application to operational loss severity estimation using real data is conducted with the truncated log-normal and log-gamma distributions. With the Bayesian method, the loss distribution parameters and value-at-risk measure for every cell with loss data can be estimated separately for internal and external data. Moreover, confidence intervals for the Bayesian estimates are obtained via a bootstrap method.
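A minimal sketch of the truncated likelihood at the heart of this comparison: the lognormal density renormalized by the probability of exceeding the reporting threshold, maximized here by a crude grid search rather than by any of the paper's four estimators. Sample size, threshold and grid are illustrative:

```python
import math, random

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def trunc_lognorm_loglik(data, mu, sigma, t):
    """Log-likelihood of lognormal(mu, sigma) data observed only above a
    reporting threshold t: each density term is renormalized by
    P(X > t) = 1 - Phi((ln t - mu) / sigma)."""
    tail = 1.0 - norm_cdf((math.log(t) - mu) / sigma)
    ll = 0.0
    for x in data:
        z = (math.log(x) - mu) / sigma
        ll -= math.log(x * sigma * math.sqrt(2 * math.pi)) + 0.5 * z * z
    return ll - len(data) * math.log(tail)

random.seed(3)
t = 1.0
data = [x for x in (random.lognormvariate(0.0, 1.0) for _ in range(20000))
        if x > t][:4000]
# crude grid search over (mu, sigma); a real study would use an optimizer
# or the Bayesian machinery discussed in the abstract
grid = [(m / 10, s / 10) for m in range(-5, 6) for s in range(6, 15)]
mu_hat, sig_hat = max(grid, key=lambda p: trunc_lognorm_loglik(data, *p, t))
print(mu_hat, sig_hat)
```

Dropping the `- n * log(tail)` correction and fitting the plain lognormal to the truncated sample is exactly the mistake that produces the badly biased estimates the abstract warns about.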

15.
It has been a while since the literature on the pricing kernel puzzle was summarized in Jackwerth (Option-implied risk-neutral distributions and risk-aversion, The Research Foundation of AIMR, Charlotteville, 2004). That older survey also covered the topic of risk-neutral distributions, which was itself already surveyed in Jackwerth (J Deriv 2:66–82, 1999). Much has happened in those years and estimation of risk-neutral distributions has moved from new and exciting in the last half of the 1990s to becoming a well-understood technology. Thus, the present survey will focus on the pricing kernel puzzle, which was first discussed around 2000. We document the pricing kernel puzzle in several markets and present the latest evidence concerning its (non-)existence. Econometric studies are detailed which test for the pricing kernel puzzle. The present work adds much breadth in terms of economic explanations of the puzzle. New challenges for the field are described in the process.

16.
The use of mixture distributions for modeling asset returns has a long history in finance. New methods of demonstrating support for the presence of mixtures in the multivariate case are provided. The use of a two-component multivariate normal mixture distribution, coupled with shrinkage via a quasi-Bayesian prior, is motivated, and shown to be numerically simple and reliable to estimate, unlike the majority of multivariate GARCH models in existence. Equally important, it provides a clear improvement over use of GARCH models feasible for use with a large number of assets, such as constant conditional correlation, dynamic conditional correlation, and their extensions, with respect to out-of-sample density forecasting. A generalization to a mixture of multivariate Laplace distributions is motivated via univariate and multivariate analysis of the data, and an expectation–maximization algorithm is developed for its estimation in conjunction with a quasi-Bayesian prior. It is shown to deliver significantly better forecasts than the mixed normal, with fast and numerically reliable estimation. Crucially, the distribution theory required for portfolio theory and risk assessment is developed.
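The multivariate, quasi-Bayesian estimation of the paper is beyond a short sketch; the univariate two-component normal mixture below shows only the unpenalized EM core (E-step responsibilities, M-step weighted moment updates) on synthetic data, with all names and values our own:

```python
import math, random

def em_two_normals(xs, mu1, mu2, sd1=1.0, sd2=1.0, w=0.5, iters=200):
    """EM for a two-component univariate normal mixture.
    E-step: posterior responsibilities; M-step: weighted moment updates."""
    def pdf(x, m, s):
        return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = []
        for x in xs:
            a = w * pdf(x, mu1, sd1)
            b = (1 - w) * pdf(x, mu2, sd2)
            r.append(a / (a + b))
        n1 = sum(r)
        n2 = len(xs) - n1
        # M-step: weighted means, standard deviations, and mixing weight
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / n2
        sd1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, xs)) / n1)
        sd2 = math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, xs)) / n2)
        w = n1 / len(xs)
    return w, (mu1, sd1), (mu2, sd2)

random.seed(5)
xs = [random.gauss(-2, 0.7) for _ in range(600)] + \
     [random.gauss(3, 1.2) for _ in range(400)]
print(em_two_normals(xs, mu1=-1.0, mu2=1.0))
```

The quasi-Bayesian shrinkage of the paper would add a penalty to these M-step updates; here they are the plain maximum likelihood updates.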

17.
Variable annuities are insurance products that contain complex guarantees. To manage the financial risks associated with these guarantees, insurance companies rely heavily on Monte Carlo simulation. However, using Monte Carlo simulation to calculate the fair market values of these guarantees for a large portfolio of variable annuities is extremely time consuming. In this article, we propose the class of GB2 distributions to model the fair market values of guarantees to capture the positive skewness typically observed empirically. Numerical results are used to demonstrate and evaluate the performance of the proposed model in terms of accuracy and speed.

18.
This article derives underlying asset risk-neutral probability distributions of European options on the S&P 500 index. Nonparametric methods are used to choose probabilities that minimize an objective function, subject to the requirement that the probabilities are consistent with observed option and underlying asset prices. Alternative optimization specifications produce approximately the same implied distributions. A new and fast optimization technique for estimating probability distributions based on maximizing the smoothness of the resulting distribution is proposed. Since the crash, the risk-neutral probability of a three (four) standard deviation decline in the index (about −36 percent (−46 percent) over a year) is about 10 (100) times larger than under the assumption of lognormality.
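The smoothness-maximizing optimization itself is not reproduced here; the sketch below shows the Breeden–Litzenberger relation that underlies risk-neutral density extraction, recovering the density as the discounted second strike-derivative of call prices, applied to synthetic Black–Scholes prices with illustrative parameters:

```python
import math

def bs_call(s, k, r, sigma, t):
    """Black–Scholes European call price."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma * sigma) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    N = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return s * N(d1) - k * math.exp(-r * t) * N(d2)

def rn_density(strikes, calls, r, t):
    """Breeden–Litzenberger: q(K) = e^{rt} * d2C/dK2, approximated by
    central finite differences on an equally spaced strike grid."""
    h = strikes[1] - strikes[0]
    return [math.exp(r * t) * (calls[i - 1] - 2 * calls[i] + calls[i + 1]) / h ** 2
            for i in range(1, len(calls) - 1)]

s0, r, sigma, t = 100.0, 0.02, 0.2, 1.0
ks = [40 + 0.5 * i for i in range(281)]        # equally spaced strikes 40..180
cs = [bs_call(s0, k, r, sigma, t) for k in ks]
q = rn_density(ks, cs, r, t)
print(round(sum(qi * 0.5 for qi in q), 3))     # total mass should be close to 1
```

With Black–Scholes inputs the recovered density is lognormal by construction; applied to market option prices, the same second-difference idea exposes the fat left tail that the abstract quantifies.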

19.
Cybersecurity risk has attracted considerable attention in recent decades. However, the modeling of cybersecurity risk is still in its infancy, mainly because of its unique characteristics. In this study, we develop a framework for modeling and pricing cybersecurity risk. The proposed model consists of three components: the epidemic model, loss function, and premium strategy. We study the dynamic upper bounds for the infection probabilities based on both Markov and non-Markov models. A simulation approach is proposed to compute the premium for cybersecurity risk for practical use. The effects of different infection distributions and dependence among infection processes on the losses are also studied.

20.
A rich variety of probability distributions has been proposed in the actuarial literature for fitting insurance loss data. Examples include: lognormal, log-t, various versions of Pareto, loglogistic, Weibull, gamma and its variants, and generalized beta of the second kind distributions, among others. In this paper, we supplement the literature by adding the log-folded-normal and log-folded-t families. Shapes of the density function and key distributional properties of the 'folded' distributions are presented along with three methods for the estimation of parameters: the method of maximum likelihood, the method of moments, and the method of trimmed moments. Further, large- and small-sample properties of these estimators are studied in detail. Finally, we fit the newly proposed distributions to data representing the total damage done by 827 fires in Norway in 1988. The fitted models are then employed in a few quantitative risk management examples, where point and interval estimates for several value-at-risk measures are calculated.
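Of the three estimation methods listed, the method of moments is the easiest to sketch. For the half-normal special case (a folded normal with μ = 0, not the paper's full log-folded families), E[X] = σ√(2/π) inverts to a one-line estimator; the function name and parameters are illustrative:

```python
import math, random

def halfnormal_mom(data):
    """Method-of-moments scale estimate for the half-normal distribution
    (folded normal with mu = 0): E[X] = sigma * sqrt(2/pi), so
    sigma_hat = mean(x) * sqrt(pi/2)."""
    return (sum(data) / len(data)) * math.sqrt(math.pi / 2)

random.seed(11)
sigma = 2.5
xs = [abs(random.gauss(0.0, sigma)) for _ in range(10000)]
print(round(halfnormal_mom(xs), 3))
```

For the general folded normal (μ ≠ 0) the mean involves both Φ and the normal density, so moment matching requires solving a two-equation system numerically rather than this closed-form inversion.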


Copyright © Beijing Qinyun Technology Development Co., Ltd.  ICP license: 京ICP备09084417号