Similar Articles
20 similar articles found.
1.
Abstract

This article investigates the performance of interval estimators of various actuarial risk measures. We consider the following risk measures: the proportional hazards transform (PHT), the Wang transform (WT), value-at-risk (VaR), and the conditional tail expectation (CTE). Confidence intervals for these measures are constructed by applying nonparametric approaches (empirical and bootstrap), the strict parametric approach (based on maximum likelihood estimators), and robust parametric procedures (based on trimmed means).

Using Monte Carlo simulations, we compare the average lengths and proportions of coverage (of the true measure) of the intervals under two data-generating scenarios: “clean” data and “contaminated” data. In the “clean” case, data sets are generated by the following (similar shape) parametric families: exponential, Pareto, and lognormal. Parameters of these distributions are selected so that all three families are equally risky with respect to a fixed risk measure. In the “contaminated” case, the “clean” data sets from these distributions are mixed with a small fraction of unusual observations (outliers). It is found that approximate knowledge of the underlying distribution combined with a sufficiently robust estimator (designed for that distribution) yields intervals with satisfactory performance under both scenarios.
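The nonparametric bootstrap approach above can be sketched concretely. The following minimal Python example (all names, seeds, and parameter choices are ours, not the article's) builds a percentile-bootstrap confidence interval for the empirical CTE of a simulated exponential loss sample:

```python
import numpy as np

def var_cte(sample, level=0.95):
    """Empirical value-at-risk (VaR) and conditional tail expectation (CTE)."""
    var = np.quantile(sample, level)
    cte = sample[sample >= var].mean()   # mean loss beyond the VaR level
    return var, cte

def bootstrap_ci(sample, level=0.95, n_boot=2000, alpha=0.10, seed=0):
    """Percentile-bootstrap confidence interval for the CTE."""
    rng = np.random.default_rng(seed)
    n = len(sample)
    ctes = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(sample, size=n, replace=True)
        ctes[b] = var_cte(resample, level)[1]
    lo, hi = np.quantile(ctes, [alpha / 2, 1 - alpha / 2])
    return lo, hi

rng = np.random.default_rng(42)
losses = rng.exponential(scale=1.0, size=500)   # illustrative "clean" data
lo, hi = bootstrap_ci(losses, level=0.95)
```

The same resampling loop applies to VaR, PHT, or WT by swapping the statistic computed on each resample.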

2.
Abstract

This paper develops a Pareto scale-inflated outlier model. This model is intended for use when data from some standard Pareto distribution of interest is suspected to have been contaminated with a relatively small number of outliers from a Pareto distribution with the same shape parameter but with an inflated scale parameter. The Bayesian analysis of this Pareto scale-inflated outlier model is considered and its implementation using the Gibbs sampler is discussed. The paper contains three worked illustrative examples, two of which feature actual insurance claims data.

3.
Abstract

In this paper we consider different approximations for computing the distribution function or risk measures related to a discrete sum of nonindependent lognormal random variables. Comonotonic upper and lower bound approximations for such sums have been proposed in Dhaene et al. (2002a,b). We introduce the comonotonic “maximal variance” lower bound approximation. We also compare the comonotonic approximations with two well-known moment-matching approximations: the lognormal and the reciprocal Gamma approximations. We find that for a wide range of parameter values the comonotonic “maximal variance” lower bound approximation outperforms the other approximations.
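For intuition, the lognormal moment-matching approximation can be sketched in the simplest, independent case (the paper treats dependent sums): match the mean and variance of the sum with those of a single lognormal. A minimal illustration with hypothetical parameters:

```python
import numpy as np

def lognormal_moment_match(mus, sigmas):
    """Match the first two moments of a sum of independent lognormals
    with a single lognormal; return its (mu, sigma) parameters."""
    means = np.exp(mus + sigmas**2 / 2)
    variances = (np.exp(sigmas**2) - 1) * np.exp(2 * mus + sigmas**2)
    m, v = means.sum(), variances.sum()   # moments of the sum
    sigma2 = np.log(1 + v / m**2)         # invert the lognormal moment formulas
    mu = np.log(m) - sigma2 / 2
    return mu, np.sqrt(sigma2)

mus = np.array([0.0, 0.1, 0.2])           # illustrative parameters
sigmas = np.array([0.3, 0.25, 0.2])
mu_s, sigma_s = lognormal_moment_match(mus, sigmas)
```

By construction the fitted lognormal reproduces the mean and variance of the sum exactly; only the tail behaviour differs, which is where the comonotonic bounds become relevant.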

4.
Summary

An estimator which is a linear function of the observations and which minimises the expected square error within the class of linear estimators is called an “optimal linear” estimator. Such an estimator may also be regarded as a “linear Bayes” estimator in the spirit of Hartigan (1969). Optimal linear estimators of the unknown mean of a given data distribution have been described by various authors; corresponding “linear empirical Bayes” estimators have also been developed.

The present paper exploits the results of Lloyd (1952) to obtain optimal linear estimators, based on order statistics, of the location and/or scale parameter(s) of a continuous univariate data distribution. Related “linear empirical Bayes” estimators, which can be applied in the absence of exact knowledge of the optimal estimators, are also developed. This approach allows one to extend the results to the case of censored samples.

5.
We study a family of distributions generated from multiply monotone functions that includes a multivariate Pareto distribution and a previously unidentified exponential-Pareto distribution. We utilize an established link with Archimedean survival copulas to provide further examples, including a multivariate Weibull distribution, that may be used to fit light- or heavy-tailed phenomena and which exhibit various forms of dependence, ranging from positive to negative. Because the model is intended for the study of joint lifetimes, we consider the effect of truncation and formulate properties required for a number of parameter estimation procedures based on moments and quantiles. For the quantile-based estimation procedure applied to the multivariate Weibull distribution, we also address the problem of optimal quantile selection.

6.
Abstract

It is shown how one may construct tests and confidence regions for the unknown structural parameters in empirical linear Bayes estimation problems. The case of the collateral units having varying “designs” (i.e. regressor and covariance matrices) may be treated under the assumption that the design variables independently follow a common statistical law. The results are of an asymptotic nature.

7.
Summary

Large-sample estimation of the origin (α1) and the scale parameter (α2) of the gamma distribution when the shape parameter m is known is considered. Assuming both parameters are unknown, the optimum spacings (0 < λ1 < λ2 < … < λk < 1) determining the maximum efficiencies among other choices of the same number of observations are obtained. The coefficients to be used in computing the estimates, their variances, and their asymptotic relative efficiencies (A.R.E.) relative to the Cramér-Rao lower bounds are given.

8.
Abstract

This paper proposes a computationally efficient algorithm for quantifying the impact of interest-rate risk and longevity risk on the distribution of annuity values in the distant future. The algorithm simulates the state variables out to the end of the horizon period and then uses a Taylor series approximation to compute approximate annuity values at the end of that period, thereby avoiding a computationally expensive “simulation-within-simulation” problem. Illustrative results suggest that annuity values are likely to rise considerably but are also quite uncertain. These findings have some unpleasant implications both for defined contribution pension plans and for defined benefit plan sponsors considering using annuities to hedge their exposure to these risks at some point in the future.
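The Taylor-series idea can be illustrated in a stripped-down setting. Here the state is reduced to a flat short rate and a fixed survival curve (placeholders for the paper's stochastic interest-rate and mortality models); a first-order expansion in the rate reprices the annuity using only quantities computed once at the expansion point, avoiding a nested simulation:

```python
import numpy as np

def annuity_value(r, survival):
    """Present value of a life annuity of 1 per year at a flat rate r."""
    t = np.arange(1, len(survival) + 1)
    return np.sum(np.exp(-r * t) * survival)

def annuity_taylor(r0, dr, survival):
    """First-order Taylor approximation of the annuity value at r0 + dr."""
    t = np.arange(1, len(survival) + 1)
    disc = np.exp(-r0 * t) * survival
    a0 = disc.sum()              # value at the expansion point r0
    dadr = -(t * disc).sum()     # derivative of the value w.r.t. the rate
    return a0 + dadr * dr

survival = 0.98 ** np.arange(1, 41)    # illustrative 40-year survival curve
exact = annuity_value(0.03 + 0.002, survival)
approx = annuity_taylor(0.03, 0.002, survival)
```

In the paper's setting each simulated path supplies its own rate and mortality shocks, but the expansion-point quantities are still computed only once.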

9.
Abstract

Statistical extreme value theory provides a flexible and theoretically well motivated approach to the study of large losses in insurance. We give a brief review of the modern version of this theory and a “step by step” example of how to use it in large claims insurance. The discussion is based on a detailed investigation of a wind storm insurance problem. New results include a simulation study of estimators in the peaks over thresholds method with Generalised Pareto excesses, a discussion of Pareto and lognormal modelling and of methods to detect trends. Further results concern the use of meteorological information in wind storm insurance and, of course, the results of the study of the wind storm claims.
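A minimal sketch of the peaks-over-thresholds step, using scipy and simulated heavy-tailed claims in place of the paper's wind-storm data (the threshold choice and sample are illustrative only):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Simulated claim data with a heavy right tail (stand-in for storm claims).
claims = stats.pareto(b=2.5).rvs(size=5000, random_state=rng)

threshold = np.quantile(claims, 0.95)            # pick a high threshold
excesses = claims[claims > threshold] - threshold

# Fit a Generalised Pareto distribution to the excesses (location fixed at 0).
shape, loc, scale = stats.genpareto.fit(excesses, floc=0)
```

For a true Pareto sample the excesses over a high threshold are exactly Generalised Pareto with shape 1/b, so the fitted shape should land near 0.4 here; in practice threshold choice is a diagnostic exercise in itself.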

10.
Abstract

Background

Insurance accounting is, generally speaking, based upon the idea that a comparison shall be made between “premiums earned” and “claims incurred”. Even if there are exceptions in different countries and in different classes of business, the method where premiums earned and claims incurred are compared is so widely used that we will take it as our starting point for a discussion of the shortcomings, if any, of insurance accounting.

11.
In this paper, we study issues related to optimal portfolio estimators and the local asymptotic normality (LAN) of the return process under the assumption that the return process has an infinite moving average, MA(∞), representation with skew-normal innovations. The paper consists of two parts. In the first part, we discuss the influence of the skewness parameter δ of the skew-normal distribution on the optimal portfolio estimators. Based on the asymptotic distribution of the portfolio estimator for a non-Gaussian dependent return process, we evaluate the influence of δ on its asymptotic variance V(δ). We also investigate the robustness of the estimators of a standard optimal portfolio via numerical computations. In the second part of the paper, we assume that the MA coefficients and the mean vector of the return process depend on a lower-dimensional set of parameters. Based on this assumption, we discuss the LAN property of the return's distribution when the innovations follow a skew-normal law. The influence of δ on the central sequence of the LAN is evaluated both theoretically and numerically.

12.
1. Introduction.

The two cases where a normal distribution is “truncated” at a known point have been treated by R. A. Fisher (1) and W. L. Stevens (2), respectively. Fisher treated the case in which all record is omitted of observations below a given value, while Stevens treated the case in which the frequency of observations below a given value is recorded but the individual values of these observations are not specified. In both cases the distribution is usually termed truncated. In the first case, admittedly, the observations form a random sample drawn from an incomplete normal distribution, but in the second case we sample from a complete normal distribution in which the obtainable information in a sense has been censored, either by nature or by ourselves. To distinguish between the two cases the distributions will be called truncated and censored, respectively. (The term “censored” was suggested to me by Mr J. E. Kerrich.) The term “point of truncation” will be used for both.

13.
Summary

In the present paper we study the problem of optimal stratifications for estimating the mean vector μ of a given multivariate distribution F(x) with covariance matrix Σ, both in the case of proportionate allocation and in that of optimal (or generalized Neyman) allocation. An “optimal stratification” is one that makes the covariance matrix of an unbiased estimator X of μ minimal, in the sense of the semi-order defined below, in the space of symmetric matrices. We show the existence of an optimal stratification and give necessary conditions for a stratification to be optimal. Moreover, we prove that an optimal stratification can be represented by a “hyperplane stratification” or a “quadratic hypersurface stratification” under proportionate or optimal (generalized Neyman) allocation, respectively, and that the set of all optimal (or admissible) stratifications is a minimal complete class in a sense analogous to that of decision theory. Further, we discuss optimal stratification when a criterion based on a suitable real-valued function is adopted instead of the semi-order.

14.
Evolving volatility is a dominant feature observed in most financial time series and a key parameter used in option pricing and many other financial risk analyses. A number of methods for non-parametric scale estimation are reviewed and assessed with regard to the stylized features of financial time series. A new non-parametric procedure for estimating historical volatility is proposed, based on local maximum likelihood estimation for the t-distribution. The performance of this procedure is assessed using simulated and real price data and is found to be the best among the estimators we consider. We propose that it replace the moving-variance historical volatility estimator.
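One concrete reading of maximum likelihood estimation for the t-distribution on a single window of returns (the paper's procedure is local/rolling; the fixed degrees of freedom, zero location, and all names here are our assumptions):

```python
import numpy as np
from scipy import stats

def t_mle_volatility(returns, df=5):
    """Volatility estimate for one window: fit the scale of a Student-t
    (fixed degrees of freedom, zero location) by maximum likelihood,
    then convert the t scale parameter to a standard deviation."""
    _, _, scale = stats.t.fit(returns, fdf=df, floc=0)
    return scale * np.sqrt(df / (df - 2))   # sd of a t with this scale

# Simulated window of t-distributed returns with true volatility 0.02.
rng = np.random.default_rng(1)
true_sigma = 0.02
window = stats.t(df=5, scale=true_sigma * np.sqrt(3 / 5)).rvs(
    500, random_state=rng)
vol = t_mle_volatility(window, df=5)
```

Applied over a moving window, this yields a historical volatility series that, unlike the moving variance, is not dragged around by individual extreme returns.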

15.
This paper studies the parameter estimation problem for Ornstein–Uhlenbeck stochastic volatility models driven by Lévy processes. Estimation is regarded as the principal challenge in applying these models since they were proposed by Barndorff-Nielsen and Shephard [J. R. Stat. Soc. Ser. B, 2001, 63(2), 167–241]. Most previous work has used a Bayesian paradigm, whereas we treat the problem in the framework of maximum likelihood estimation, applying gradient-based simulation optimization. A hidden Markov model is introduced to formulate the likelihood of observations; sequential Monte Carlo is applied to sample the hidden states from the posterior distribution; smooth perturbation analysis is used to deal with the discontinuities introduced by jumps in estimating the gradient. Numerical experiments indicate that the proposed gradient-based simulated maximum likelihood estimation approach provides an efficient alternative to current estimation methods.

16.
Abstract

It was the Swiss actuary Chr. Moser who, in lectures at Bern University at the turn of the century, gave the name “self-renewing aggregate” to what Vajda (1947) has called the “unstationary community” of lives, namely where deaths at any epoch are immediately replaced by an equivalent number of births. It was Moser too (1926) who coined the expression “steady state” for the stationary community in which the age distribution at any time follows the life table (King, 1887). With such a distinguished actuarial history, excellently summarized by Saxer (1958, Ch. IV), it behoves every actuary to know at least the definitions and modus operandi of today's so-called renewal (point), or recurrent event, processes.

17.
Abstract

In this paper we develop several composite Weibull-Pareto models and suggest their use to model loss payments and other forms of actuarial data. These models all comprise a Weibull distribution up to a threshold point, and some form of Pareto distribution thereafter. They are similar in spirit to some composite lognormal-Pareto models that have previously been considered in the literature. All of these models are applied, and their performance compared, in the context of a real-world fire insurance data set.
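A simple member of such a composite family can be sketched as follows: a Weibull density truncated to the region below a threshold θ, a Pareto density above it, and a mixing weight fixed by requiring the density to be continuous at θ (the literature typically also imposes differentiability, which this sketch omits):

```python
import math

def composite_weibull_pareto_pdf(x, k, lam, alpha, theta):
    """Density of a composite model: truncated Weibull(k, lam) on (0, theta],
    Pareto(alpha, theta) beyond, with the mixing weight r chosen so the
    density is continuous at the threshold theta."""
    # Weibull pdf and its cdf at the threshold
    F_w_theta = 1 - math.exp(-((theta / lam) ** k))
    f_w_theta = (k / lam) * (theta / lam) ** (k - 1) * math.exp(-((theta / lam) ** k))
    # Continuity at theta: r * f_w(theta)/F_w(theta) = (1 - r) * alpha/theta
    a = f_w_theta / F_w_theta
    b = alpha / theta
    r = b / (a + b)
    if x <= 0:
        return 0.0
    if x <= theta:
        f_w = (k / lam) * (x / lam) ** (k - 1) * math.exp(-((x / lam) ** k))
        return r * f_w / F_w_theta
    return (1 - r) * alpha * theta ** alpha / x ** (alpha + 1)
```

Estimation then proceeds by maximizing the likelihood over (k, lam, alpha, theta) jointly, with the Pareto shape alpha governing the fitted tail heaviness.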

18.
Abstract

Extract

While in some linear estimation problems the principle of unbiasedness can be said to be appropriate, we have just seen that in the present context we will have to appeal to other criteria. Let us first consider what we get from the maximum likelihood method. We do not claim any particular optimum property for this estimate of the risk distribution: it seems plausible however that one can prove a large sample result analogous to the classical result on maximum likelihood estimation.

19.
ABSTRACT

The “Belt and Road Initiative” has involved deepening infrastructure construction along the “Belt and Road”. Using data from countries that have joined the “Belt and Road”, this study examines how infrastructure construction has affected economic development along the route. Findings show that infrastructure construction can promote economic growth and per capita output growth while improving the income distribution of residents along the “Belt and Road”. Results also indicate that the effect of infrastructure construction on economic development is heterogeneous; such construction can substantially increase economic growth in developing countries but has no significant effect on economic growth in developed and emerging developing countries. Infrastructure construction can greatly improve residents’ income distribution in developed and developing countries but has no significant effect on residents in emerging developing countries. Collectively, these findings identify foreign direct investment and urbanization as important channels through which infrastructure construction can influence economic development.

20.
ABSTRACT

In this note, we consider a nonstandard analytic approach to the examination of scale functions in some special cases of spectrally negative Lévy processes. In particular, we consider the compound Poisson risk process with or without perturbation from an independent Brownian motion. New explicit expressions for the first and second scale functions are derived which complement existing results in the literature. We specifically consider cases where the claim size distribution is gamma, uniform or inverse Gaussian. Some ruin-related quantities will also be re-examined in light of the aforementioned results.
