Similar Articles

20 similar articles found (search time: 31 ms).
1.
Abstract

A credibility estimator is Bayes in the restricted class of linear estimators and may be viewed as a linear approximation to the (unrestricted) Bayes estimator. When the structural parameters occurring in a credibility formula are replaced by consistent estimators based on data from a collective of similar risks, we obtain an empirical credibility estimator, which is a credibility counterpart of empirical Bayes estimators. Empirical credibility estimators are proposed under various model assumptions, and sufficient conditions for asymptotic optimality are established.
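As a concrete illustration, the classical Bühlmann form of such an empirical credibility estimator can be sketched as follows. This is a generic textbook version in which the structural parameters are replaced by the usual consistent estimators, not the specific estimators studied in the paper:

```python
import numpy as np

def buhlmann_credibility(claims):
    """Empirical Buhlmann credibility premiums for a collective of risks.

    claims: 2-D array, one row per risk, columns are observation periods.
    The structural parameters (overall mean, expected process variance,
    variance of hypothetical means) are replaced by consistent
    estimators computed from the collective.
    """
    claims = np.asarray(claims, dtype=float)
    k, n = claims.shape
    risk_means = claims.mean(axis=1)
    overall_mean = risk_means.mean()
    # expected process variance: average within-risk sample variance
    s2 = claims.var(axis=1, ddof=1).mean()
    # variance of hypothetical means (truncated at 0 in small samples)
    a = max(risk_means.var(ddof=1) - s2 / n, 0.0)
    z = 0.0 if a == 0.0 else n / (n + s2 / a)  # credibility factor
    return z * risk_means + (1.0 - z) * overall_mean
```

Each risk's premium is a weighted average of its own experience and the collective mean, with the weight z tending to 1 as the number of observation periods grows.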

2.
Abstract

Current formulas in credibility theory often estimate expected claims as a function of the sample mean of the experience claims of a policyholder. An actuary may wish to estimate future claims as a function of some statistic other than the sample arithmetic mean of claims, such as the sample geometric mean. This can be suggested to the actuary through the exercise of regressing claims on the geometric mean of prior claims. It can also be suggested through a particular probabilistic model of claims, such as a model that assumes a lognormal conditional distribution. In the first case, the actuary may lean towards using a linear function of the geometric mean, depending on the results of the data analysis. On the other hand, through a probabilistic model, the actuary may want to use the most accurate estimator of future claims, as measured by squared-error loss. However, this estimator might not be linear.

In this paper, I provide a method for balancing the conflicting goals of linearity and accuracy. The credibility estimator proposed minimizes the expectation of a linear combination of a squared-error term and a second-derivative term. The squared-error term measures the accuracy of the estimator, while the second-derivative term constrains the estimator to be close to linear. I consider only those families of distributions with a one-dimensional sufficient statistic and estimators that are functions of that sufficient statistic or of the sample mean. Claim estimators are evaluated by comparing their conditional mean squared errors. In general, functions of the sufficient statistics prove to be better credibility estimators than functions of the sample mean.
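To make the kind of statistic involved concrete, here is a minimal sketch of a predictor that is linear in the sample geometric mean of prior claims. The coefficients and function names are hypothetical illustrations, not the paper's estimator:

```python
import numpy as np

def geometric_mean(claims):
    """Sample geometric mean of strictly positive claims."""
    claims = np.asarray(claims, dtype=float)
    return float(np.exp(np.log(claims).mean()))

def linear_geometric_predictor(claims, slope, intercept):
    """Hypothetical claim predictor linear in the geometric mean:
    b0 + b1 * G, where G is the geometric mean of prior claims.
    In practice the coefficients would come from regressing observed
    claims on the geometric mean of earlier claims."""
    return intercept + slope * geometric_mean(claims)
```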

3.
This paper develops a discrete time version of the continuous time model of Bouchard et al. [J. Control Optim., 2009, 48, 3123–3150], for the problem of finding the minimal initial data for a controlled process to guarantee reaching a controlled target with probability one. An efficient numerical algorithm, based on dynamic programming, is proposed for the quantile hedging of standard call and put options, exotic options and quantile hedging with portfolio constraints. The method is then extended to solve utility indifference pricing, good-deal bounds and expected shortfall problems.

4.
ABSTRACT

In this paper, we propose new reinsurance premium principles that minimize the expected weighted loss functions and balance the trade-off between the reinsurer's shortfall risk and the insurer's risk exposure in a reinsurance contract. Random weighting factors are introduced in the weighted loss functions so that weighting factors are based on the underlying insurance risks. The resulting reinsurance premiums depend on both the loss covered by the reinsurer and the loss retained by the insurer. The proposed premiums provide new ways for pricing reinsurance contracts and controlling the risks of both the reinsurer and the insurer. As applications of the proposed principles, the modified expectile reinsurance principle and the modified quantile reinsurance principle are introduced and discussed in detail. The properties of the new reinsurance premium principles are investigated. Finally, comparisons between the new reinsurance premium principles and the classical expectile principle, the classical quantile principle, and the risk-adjusted principle are provided.

5.
Abstract

Estimation of the tail index parameter of a single-parameter Pareto model has wide application in actuarial and other sciences. Here we examine various estimators from the standpoint of two competing criteria: efficiency and robustness against upper outliers. With the maximum likelihood estimator (MLE) being efficient but nonrobust, we desire alternative estimators that retain a relatively high degree of efficiency while also being adequately robust. A new generalized median type estimator is introduced and compared with the MLE and several well-established estimators associated with the methods of moments, trimming, least squares, quantiles, and percentile matching. The method of moments and least squares estimators are found to be relatively deficient with respect to both criteria and should be disfavored, while the trimmed mean and generalized median estimators tend to dominate the other competitors. The generalized median type performs best overall. These findings provide a basis for revision and updating of prevailing viewpoints. Other topics discussed are applications to robust estimation of upper quantiles, tail probabilities, and actuarial quantities, such as stop-loss and excess-of-loss reinsurance premiums, which arise in connection with the solvency of portfolios. Robust parametric methods are compared with empirical nonparametric methods, which are typically nonrobust.
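For intuition on the efficiency/robustness trade-off, the MLE and a simple median-type alternative can be sketched as follows. The median version here is a simplified stand-in for the paper's generalized median estimator, exploiting the fact that the log-exceedances of a Pareto sample are exponentially distributed:

```python
import numpy as np

def pareto_mle(x, xm):
    """MLE of the tail index of a single-parameter Pareto(xm, alpha):
    alpha_hat = n / sum(log(x_i / xm)). Efficient, but a single large
    outlier can drag the estimate down badly."""
    y = np.log(np.asarray(x, dtype=float) / xm)
    return float(len(y) / y.sum())

def pareto_median_estimator(x, xm):
    """Median-type alternative: log-exceedances are Exp(1/alpha), whose
    median is ln(2)/alpha, so alpha_hat = ln(2) / median(log(x/xm)).
    A simplified stand-in for the generalized median estimator."""
    y = np.log(np.asarray(x, dtype=float) / xm)
    return float(np.log(2.0) / np.median(y))
```

Adding one extreme observation changes the MLE substantially while leaving the median-type estimate untouched, which is the robustness property the paper formalizes.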

6.
Many empirical studies report that value-at-risk (VaR) measures understate the actual 1% quantile, whereas Inui, Kijima and Kitano [Stat. Probab. Lett., 2005, 72, 299–311] proved that VaR measures overstate risk significantly when historical simulation VaR is applied to fat-tailed distributions. This paper resolves the puzzle by developing a regime switching model to estimate portfolio VaR. It is shown that our model is able to correct the underestimation problem of risk.
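The baseline historical-simulation VaR whose biases are at issue, and which the paper's regime-switching model is designed to correct, can be sketched as:

```python
import numpy as np

def historical_var(returns, level=0.01):
    """Historical-simulation VaR: the empirical `level` quantile of the
    return sample, reported as a positive loss number. Its accuracy
    depends heavily on the tail behaviour of the return distribution."""
    return -float(np.quantile(np.asarray(returns, dtype=float), level))
```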

7.
Abstract

The question of which statistics to base our credibility estimators on is discussed in a general model. We introduce concepts of sufficiency, completeness, θ-sufficiency, and θ-completeness that are useful in this connection, and use methods of Rao-Blackwell type. Some of the present results are closely related to results by Taylor (1977).

8.
Abstract

As is well known in actuarial practice, excess claims (outliers) have a disturbing effect on the ratemaking process. To obtain better estimators of premiums, which are based on credibility theory, Künsch and, later, Gisler and Reinhard suggested using robust methods. The estimators proposed by these authors are indeed resistant to outliers and serve as an excellent example of how useful robust models can be for insurance pricing. In this article we further refine these procedures by reducing the degree of heuristic arguments they involve. Specifically, we develop a class of robust estimators for the credibility premium when claims are approximately gamma-distributed and thoroughly study their robustness-efficiency trade-offs in large and small samples. Under specific data-generating scenarios, this approach yields quantitative indices of estimators’ strength and weakness, and it allows the actuary (who is typically equipped with information beyond the statistical model) to choose a procedure from a full menu of possibilities. Practical performance of our methods is illustrated under several simulated scenarios and by employing expert judgment.

9.
10.
This paper considers a new approach of analyzing asset dependence by estimating how the distributions (in particular, quantiles) of assets are related. Combining the techniques of quantile regression and copula modeling, I propose the Copula Quantile-on-Quantile Regression approach to estimate the correlation that is associated with the quantiles of asset returns, which is able to uncover obscure nonlinear characteristics in asset dependence. The estimation procedure proposed here can also be used for analyzing dependence structures in other settings, such as for studying how macroeconomic covariates are nonlinearly related by looking at the relationship between their quantiles.

11.
This paper proposes a new class of estimators based on the interquantile range of intraday returns, referred to as interquantile range based volatility (IQRBV), to estimate the integrated daily volatility. More importantly and intuitively, it is shown that a properly chosen IQRBV is jump-free for its trimming of the intraday extreme two tails that utilize the range between symmetric quantiles. We exploit its approximation optimality by examining a general class of distributions from the Pearson type IV family and recommend using IQRBV.04 as the integrated variance estimate. Both our simulation and the empirical results highlight interesting features of the easy-to-implement and model-free IQRBV over the other competing estimators that are seen in the literature.
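A simplified sketch of the idea, not the paper's exact IQRBV construction: estimate the return scale from the range between symmetric empirical quantiles, normalised by the corresponding standard-normal quantile, so that extreme observations (jumps) in either tail are trimmed away:

```python
import numpy as np

def iqr_volatility(intraday_returns, p=0.25, z=0.6744897501960817):
    """Interquantile-range scale estimate: the range between the p and
    (1-p) empirical quantiles, divided by twice the standard-normal
    (1-p) quantile z. Trimming both tails makes the estimate
    insensitive to a small number of jump returns."""
    r = np.asarray(intraday_returns, dtype=float)
    lo, hi = np.quantile(r, [p, 1.0 - p])
    return float((hi - lo) / (2.0 * z))
```

Because the estimator only looks at interior quantiles, appending a single huge "jump" observation leaves it essentially unchanged, which is the jump-free property the abstract describes.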

12.
Summary

An estimator which is a linear function of the observations and which minimises the expected square error within the class of linear estimators is called an “optimal linear” estimator. Such an estimator may also be regarded as a “linear Bayes” estimator in the spirit of Hartigan (1969). Optimal linear estimators of the unknown mean of a given data distribution have been described by various authors; corresponding “linear empirical Bayes” estimators have also been developed.

The present paper exploits the results of Lloyd (1952) to obtain optimal linear estimators, based on order statistics, of the location and/or scale parameter(s) of a continuous univariate data distribution. Related “linear empirical Bayes” estimators which can be applied in the absence of the exact knowledge of the optimal estimators are also developed. This approach allows one to extend the results to the case of censored samples.

13.
Although Tobin's q is an attractive theoretical firm performance measure, its empirical construction is subject to considerable measurement error. In this paper we compare five estimators of q that range from a simple-to-construct estimator based on book-values to a relatively complex estimator based upon the methodology developed by Lindenberg and Ross (1981). We present comparisons of the means, medians and variances of the q estimates, and examine how robust sorting and regression results are to changes in the construction of q. We find that empirical results are sensitive to the method used to estimate Tobin's q. The simple-to-construct estimator produces empirical results that differ significantly from the alternative estimators. Among the other four estimators, one developed by Hall (1990) produces means that are higher and variances that are larger than the three alternative estimators, but does approximate those estimators in most of the empirical comparisons. Those three alternative q ratio estimators, furthermore, produce empirical results that are robust.
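A book-value-based "simple" q of the kind compared in such studies can be sketched as follows; the variable names are illustrative, and the costly Lindenberg-Ross alternative instead estimates the replacement cost of assets:

```python
def simple_q(equity_market_value, debt_book_value, assets_book_value):
    """Simple-to-construct Tobin's q: (market value of equity + book
    value of debt) / book value of total assets. A rough proxy for
    the theoretical ratio of market value to replacement cost."""
    return (equity_market_value + debt_book_value) / assets_book_value
```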

14.
Abstract

The log-normal reserving model is considered. The contribution of the paper is to derive explicit expressions for the maximum likelihood estimators. These are expressed in terms of development factors which are geometric averages. The distribution of the estimators is derived. It is shown that the analysis is invariant to traditional measures for exposure.
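The geometric-average form of the development factors can be illustrated with a minimal sketch (a generic computation, not the paper's full derivation):

```python
import numpy as np

def geometric_development_factor(link_ratios):
    """Geometric average of observed link ratios. Under a log-normal
    model the maximum likelihood development factor takes this form,
    since averaging is done on the log scale."""
    lr = np.asarray(link_ratios, dtype=float)
    return float(np.exp(np.log(lr).mean()))
```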

15.
We present an analysis of the VaR forecasts and the P&L series of all 12 German banks that used internal models for regulatory purposes throughout the period from the beginning of 2001 to the end of 2004. One task of a supervisor is to estimate the ‘recalibration factor’, i.e. by how much a bank over- or underestimates its VaR. The Basel traffic light approach to backtesting, which maps the count of exceptions in the trailing year to a multiplicative penalty factor, can be viewed as a way to estimate the ‘recalibration factor’. We introduce techniques that provide a much more powerful inference on the recalibration factor than the Basel approach based on the count of exceptions. The notions ‘return on VaR (RoVaR)’ and ‘well-behaved forecast system’ are keys to linking the problem at hand to the established literature on the evaluation of density forecasts. We perform extensive bootstrapping analyses allowing (1) an assessment of the accuracy of our estimates of the recalibration factor and (2) a comparison of the estimation error of different scale and quantile estimators. Certain robust estimators turn out to outperform the more popular estimators used in the literature. Empirical results for the non-public data are compared to the corresponding results for hypothetical portfolios based on publicly available market data. While these comparisons have to be interpreted with care since the banks' P&L data tend to be more contaminated with errors than the major market indices, they shed light on the similarities and differences between banks' RoVaRs and market index returns.
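The exception-count backtest that the paper takes as its point of comparison can be sketched as follows; the zone boundaries are the standard Basel traffic-light values for 250 trading days at 99% VaR:

```python
import numpy as np

def count_exceptions(pnl, var_forecasts):
    """Count the days on which the realised loss exceeded the VaR
    forecast (VaR is given as a positive loss number)."""
    pnl = np.asarray(pnl, dtype=float)
    var = np.asarray(var_forecasts, dtype=float)
    return int(np.sum(pnl < -var))

def basel_zone(exceptions):
    """Basel traffic-light zones for 250 trading days at 99% VaR:
    green 0-4 exceptions, yellow 5-9, red 10 or more."""
    if exceptions <= 4:
        return "green"
    if exceptions <= 9:
        return "yellow"
    return "red"
```

The paper's point is that this count-based inference is weak; its RoVaR-based techniques extract much more information from the same forecast/P&L history.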

16.
Abstract

We show how to construct risk invariant (equalizer) linear estimators in credibility models, when the variance components are unknown but constrained by linear equations. Risk invariant linear estimators will often be minimax and admissible. They are useful in situations with unidentifiable variance components, but may also be used when reliable estimates of the variance components are not available.

17.
Will Any q Do?     
We find that the relative levels of computationally costly q estimators and simple q estimators, when used as continuous variables, are affected by variations in many firm financial characteristics. In contrast, when the estimators are used as dichotomous variables, they classify the vast majority of firms identically with respect to the unit q breakpoint. Finally, we find that the computationally costly approach may induce sample‐selection bias as a result of data unavailability. Our results suggest that the simple approach is preferable except when extreme precision of the q estimate is of paramount importance and sample‐selection bias is not likely to be an issue.

18.
In this paper, we study issues related to optimal portfolio estimators and the local asymptotic normality (LAN) of the return process under the assumption that the return process has an infinite moving average MA(∞) representation with skew-normal innovations. The paper consists of two parts. In the first part, we discuss the influence of the skewness parameter δ of the skew-normal distribution on the optimal portfolio estimators. Based on the asymptotic distribution of the portfolio estimator for a non-Gaussian dependent return process, we evaluate the influence of δ on its asymptotic variance V(δ). We also investigate the robustness of the estimators of a standard optimal portfolio via numerical computations. In the second part of the paper, we assume that the MA coefficients and the mean vector of the return process depend on a lower-dimensional set of parameters. Based on this assumption, we discuss the LAN property of the return's distribution when the innovations follow a skew-normal law. The influence of δ on the central sequence of the LAN is evaluated both theoretically and numerically.

19.
Using high frequency data for the price dynamics of equities we measure the impact that market microstructure noise has on estimates of the: (i) volatility of returns; and (ii) variance–covariance matrix of n assets. We propose a Kalman-filter-based methodology that allows us to deconstruct price series into the true efficient price and the microstructure noise. This approach allows us to employ volatility estimators that achieve very low Root Mean Squared Errors (RMSEs) compared to other estimators that have been proposed to deal with market microstructure noise at high frequencies. Furthermore, this price series decomposition allows us to estimate the variance–covariance matrix of n assets in a more efficient way than the methods so far proposed in the literature. We illustrate our results by calculating how microstructure noise affects portfolio decisions and calculations of the equity beta in a CAPM setting.
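A minimal local-level Kalman filter of the kind described, with the efficient price as a random-walk state and microstructure noise as observation error, might look like this. It is a sketch under simple scalar assumptions (known noise variances q and r), not the paper's full methodology:

```python
import numpy as np

def kalman_efficient_price(obs, q, r):
    """Local-level Kalman filter: observed log price = efficient price
    + microstructure noise (variance r), with the efficient price a
    random walk with innovation variance q. Returns the filtered
    estimates of the efficient price."""
    n = len(obs)
    x = float(obs[0])   # state estimate, initialised at first price
    p = 1.0             # state estimate variance
    est = np.empty(n)
    est[0] = x
    for t in range(1, n):
        p = p + q                    # predict: random-walk state
        k = p / (p + r)              # Kalman gain
        x = x + k * (float(obs[t]) - x)  # update with new observation
        p = (1.0 - k) * p
        est[t] = x
    return est
```

With r = 0 the filter simply tracks the observations; with large r it shrinks each new observation heavily toward the previous estimate, filtering out the noise component.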

20.
Abstract

A sequence of maximum likelihood estimators based on a sequence of independent but not necessarily identically distributed random variables is shown to be consistent under certain assumptions. Some examples are given to show that these assumptions are easy to verify and not very restrictive.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.). 京ICP备09084417号