Similar Documents
20 similar documents found (search time: 31 ms)
1.
The ruin probability of an insurance company is a central topic in risk theory. We consider the classical Poisson risk model when the claim size distribution and the Poisson arrival rate are unknown. Given a sample of inter-arrival times and corresponding claims, we propose a semiparametric estimator of the ruin probability. We establish strong consistency and asymptotic normality of the estimator and study bootstrap confidence bands. Further, we present a simulation example to investigate the finite-sample properties of the proposed estimator.
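A minimal illustration of the setting, not the authors' estimator: with the arrival rate estimated from inter-arrival times and claim sizes resampled from the data, the ruin probability for initial surplus u can be approximated by simulating surplus paths. All parameter values, the horizon, and the sample sizes below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def ruin_prob_mc(u, lam_hat, c, claim_sample, horizon=100.0, n_paths=5000):
    """Monte Carlo approximation of the finite-horizon ruin probability in
    the classical Poisson model: claims arrive at estimated rate lam_hat,
    premiums accrue at rate c, claim sizes are resampled from the data."""
    ruined = 0
    for _ in range(n_paths):
        t, aggregate = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / lam_hat)    # next claim arrival
            if t > horizon:
                break
            aggregate += rng.choice(claim_sample)  # bootstrap a claim size
            if u + c * t - aggregate < 0.0:        # surplus below zero: ruin
                ruined += 1
                break
    return ruined / n_paths

# Illustrative inputs: observed inter-arrival times and claim sizes.
inter_arrivals = rng.exponential(0.5, size=300)
claims = rng.gamma(2.0, 1.0, size=300)
lam_hat = 1.0 / inter_arrivals.mean()              # arrival-rate estimate
print(ruin_prob_mc(2.0, lam_hat, c=5.0, claim_sample=claims))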

2.

We consider the classical risk model with unknown claim size distribution F and unknown Poisson arrival rate. Given a sample of claims from F and a sample of inter-arrival times for these claims, we construct an estimator for the function Z(u), which gives the probability of non-ruin in that model for initial surplus u. We obtain strong consistency and asymptotic normality of that estimator for a large class of claim distributions F. Confidence bounds for Z(u) based on the bootstrap are also given and illustrated by some numerical examples.
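For context, plug-in estimators of this type typically exploit the standard Pollaczek–Khinchine representation of the non-ruin probability (c: premium rate, λ: Poisson rate, μ: mean claim size):

Z(u) = (1-\rho)\sum_{n=0}^{\infty} \rho^{n} F_I^{*n}(u), \qquad \rho = \frac{\lambda\mu}{c} < 1, \qquad F_I(x) = \frac{1}{\mu}\int_0^x \bigl(1 - F(y)\bigr)\,dy,

where F_I^{*n} denotes the n-fold convolution of the integrated-tail distribution F_I; replacing λ, μ, and F by their empirical counterparts yields a plug-in estimator of Z(u).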

3.
In this paper, we consider nonparametric estimation of the Gerber–Shiu function in a compound Poisson risk model perturbed by diffusion. We present a more efficient estimator based on a Fourier–sinc series expansion; it is easily computed and has a faster convergence rate than existing alternatives. Simulation examples show that the estimator performs well when the sample size is finite.
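A hedged sketch of the Fourier machinery such estimators rest on, not the paper's exact Gerber–Shiu construction: recover a density from the empirical characteristic function by truncated Fourier inversion. The truncation window T, grid sizes, and the exponential sample are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

def fourier_density(sample, x_grid, T=30.0, n_t=2048):
    """Density estimate by truncated Fourier inversion of the empirical
    characteristic function phi_n over [-T, T]:
    f(x) ~ (1/2pi) * integral exp(-i t x) phi_n(t) dt (trapezoid rule)."""
    t = np.linspace(-T, T, n_t)
    dt = t[1] - t[0]
    ecf = np.exp(1j * np.outer(t, sample)).mean(axis=1)   # phi_n(t)
    integrand = ecf[:, None] * np.exp(-1j * np.outer(t, x_grid))
    trap = (integrand[:-1] + integrand[1:]).sum(axis=0) * dt / 2.0
    return np.real(trap) / (2.0 * np.pi)

sample = rng.exponential(1.0, size=500)
est = fourier_density(sample, np.linspace(0.0, 4.0, 81))
# The estimate oscillates slightly and may dip below zero; clip if needed.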

4.
5.
We study tail estimation in Pareto-like settings for datasets with a high percentage of randomly right-censored data, where some expert information on the tail index is available for the censored observations. This setting arises naturally in liability insurance, where actuarial experts set reserves based on the specifics of each open claim; that information can be used to improve estimation based on the data points already available from closed claims. Through an entropy-perturbed likelihood, we derive an explicit estimator and establish a close analogy with Bayesian methods. Embedded in an extreme value approach, asymptotic normality of the estimator is shown, and when the expert is clairvoyant, a simple combination formula can be deduced, bridging the classical statistical approach with the expert information. From this combination formula, a combination of quantile estimators follows naturally. In a simulation study, the estimator often outperforms the Hill estimator for censored observations as well as recent Bayesian solutions, some of which require more information than is usually available. Finally, we perform a case study on a motor third-party liability insurance claim dataset, where Hill-type and quantile plots incorporate ultimate values into the estimation procedure in an intuitive manner.
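For orientation, a hedged sketch of the baseline against which such methods are compared: the Hill estimator adapted to random right-censoring by rescaling with the uncensored proportion among the top order statistics. This is the standard censored-Hill adjustment, not the paper's entropy-based estimator; variable names are illustrative.

import numpy as np

def censored_hill(z, delta, k):
    """Tail-index estimator under random right-censoring.
    z     : observed values (minimum of true value and censoring point)
    delta : 1 if uncensored, 0 if censored
    k     : number of top order statistics to use
    """
    order = np.argsort(z)
    z_sorted = z[order]
    d_sorted = delta[order]
    top = z_sorted[-k:]
    threshold = z_sorted[-k - 1]
    hill = np.mean(np.log(top / threshold))   # classical Hill on the top k
    p_uncensored = d_sorted[-k:].mean()       # proportion of non-censored points
    return hill / p_uncensored                # censoring adjustment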

6.
Given a time series of intra-day tick-by-tick price data, how can realized variance be estimated? The obvious estimator, the sum of squared returns between trades, is biased by microstructure effects such as bid–ask bounce, so in the past practitioners were advised to drop most of the data and sample at most every five minutes or so. Recently, however, numerous alternative estimators have been developed that make more efficient use of the available data and improve substantially over those based on sparsely sampled returns. From a practical viewpoint, though, the choice of estimator is not trivial: the study of their relative merits has focused primarily on the speed of convergence to their asymptotic distributions, which is not in itself a reliable guide to finite-sample performance (especially when the assumptions on the price or noise process are violated). In this paper we compare a comprehensive set of nineteen realized variance estimators using simulated data from an artificial “zero-intelligence” market that has been shown to mimic some key properties of actual markets. In evaluating the competing estimators, we concentrate on efficiency but also pay attention to implementation, practicality, and robustness. One of our key findings is that for scenarios frequently encountered in practice, the best variance estimator is not always the one suggested by theory. In fact, an ad hoc implementation of a subsampling estimator, realized kernel, or maximum-likelihood realized variance delivers the best overall result. We make firm practical recommendations on choosing and implementing a realized variance estimator, as well as on data sampling.
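A hedged sketch, assuming tick prices with timestamps in seconds, of the two baseline estimators the discussion starts from: the naive sum of squared tick returns and its sparsely sampled five-minute variant. The simulated price path and noise level are illustrative assumptions.

import numpy as np

def realized_variance(prices):
    """Naive RV: sum of squared log-returns over all ticks.
    Biased upward by microstructure noise at high frequencies."""
    r = np.diff(np.log(prices))
    return np.sum(r ** 2)

def sparse_realized_variance(prices, times, interval=300.0):
    """RV on a sparse grid (default: one observation every 300 s),
    the traditional five-minute remedy for microstructure bias."""
    grid = np.arange(times[0], times[-1], interval)
    idx = np.searchsorted(times, grid, side="right") - 1
    p = np.log(prices[idx])
    return np.sum(np.diff(p) ** 2)

rng = np.random.default_rng(2)
times = np.sort(rng.uniform(0.0, 23400.0, size=5000))       # one trading day
efficient = np.cumsum(rng.normal(0, 0.0005, size=5000))     # latent log-price
noise = rng.normal(0, 0.001, size=5000)                     # microstructure noise
prices = np.exp(4.0 + efficient + noise)
print(realized_variance(prices), sparse_realized_variance(prices, times))
# The naive RV is inflated by the noise; the sparse version is much closer
# to the integrated variance of the latent price, at the cost of data loss.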

7.
In this paper we analyse recovery rates on defaulted bonds using the Standard & Poor's/PMD database for the years 1981–1999. Because of the specific nature of the data (observations lie between 0 and 1), we must rely on nonstandard econometric techniques. The recovery rate density is estimated nonparametrically using a beta kernel method. This method is free of boundary bias, and Monte Carlo comparisons with competing nonparametric estimators show that the beta kernel density estimator is particularly well suited for density estimation on the unit interval. We challenge the usual market practice of modelling recovery rates parametrically with a beta distribution calibrated on the empirical mean and variance: that assumption cannot replicate multimodal distributions or the concentration of data at total recovery and total loss. We evaluate the impact of choosing the beta distribution on the estimation of credit Value-at-Risk.
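A minimal sketch of the beta kernel density estimator, following Chen's (1999) formulation that this line of work builds on; the bandwidth value and the simulated recovery sample are illustrative assumptions.

import numpy as np
from scipy.stats import beta

def beta_kernel_density(x, data, b=0.05):
    """Beta kernel density estimate on [0, 1]: at each evaluation point x,
    average the beta(x/b + 1, (1-x)/b + 1) density over the observations.
    Free of boundary bias on the unit interval."""
    x = np.atleast_1d(x).astype(float)
    out = np.empty_like(x)
    for j, xj in enumerate(x):
        out[j] = beta.pdf(data, xj / b + 1.0, (1.0 - xj) / b + 1.0).mean()
    return out

rng = np.random.default_rng(3)
recoveries = rng.beta(0.4, 0.6, size=400)   # U-shaped: mass near 0 and 1
grid = np.linspace(0.0, 1.0, 101)
dens = beta_kernel_density(grid, recoveries)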

8.
I study the finite sample distribution of one of Aït-Sahalia's (1996c) nonparametric tests of continuous-time models of the short-term riskless rate. The test rejects true models too often because interest rate data are highly persistent, but the asymptotic distribution of the test (and of the kernel density estimator on which the test is based) treats the data as if they were independently and identically distributed. To attain the accuracy of the kernel density estimator implied by its asymptotic distribution with 22 years of data generated from the Vasicek model in fact requires 2755 years of data.

9.
Nonparametric Estimation of Expected Shortfall (total citations: 2; self-citations: 0; citations by others: 2)
The expected shortfall is an increasingly popular risk measure in financial risk management, and it possesses the desired subadditivity property, which the value at risk (VaR) lacks. We consider two nonparametric expected shortfall estimators for dependent financial losses. One is a sample average of excessive losses larger than a VaR. The other is a kernel-smoothed version of the first estimator (Scaillet, 2004, Mathematical Finance), in the hope that smoothing achieves more accurate estimation. Our analysis reveals that the extra kernel smoothing does not produce more accurate estimation of the shortfall. This differs from estimation of the VaR, where smoothing has been shown to reduce both the variance and the mean square error of estimation. Therefore, the simpler ES estimator based on the sample average of excessive losses is attractive for shortfall estimation.
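A hedged sketch of the two estimators being compared: the sample average of losses beyond the VaR, and a kernel-smoothed variant in the spirit of Scaillet (2004). The bandwidth rule and the normalization by the weight sum (rather than n(1 - alpha)) are simplifying assumptions, and the simulated losses are illustrative.

import numpy as np
from scipy.stats import norm

def es_sample_average(losses, alpha=0.95):
    """ES as the average of losses exceeding the empirical VaR."""
    var = np.quantile(losses, alpha)
    return losses[losses > var].mean()

def es_kernel_smoothed(losses, alpha=0.95, h=None):
    """Kernel-smoothed ES: replace the exceedance indicator with a
    smoothed version and average the losses with those weights."""
    n = len(losses)
    h = h if h is not None else losses.std() * n ** (-1.0 / 3.0)
    var = np.quantile(losses, alpha)       # plug-in VaR level
    w = norm.cdf((losses - var) / h)       # smoothed indicator of exceedance
    return np.sum(w * losses) / np.sum(w)

losses = np.random.default_rng(4).standard_t(df=4, size=1000)
print(es_sample_average(losses), es_kernel_smoothed(losses))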

10.
Numerous empirical studies find pricing kernels that are not monotonically decreasing; this finding is at odds with the pricing kernel being the marginal utility of a risk-averse, so-called representative agent. We study in detail the common procedure that estimates the pricing kernel as the ratio of two separate density estimates. First, we analyse theoretically the functional dependence of the ratio of a density to its estimated density; this cautions the reader regarding potential computational issues coupled with statistical techniques. Second, we study this quantitatively: we show that small-sample biases shape the estimated pricing kernel, and that estimated pricing kernels typically violate the commonly assumed monotonicity at the centre even when the true pricing kernel satisfies it. This contributes an alternative, statistical explanation for the puzzling shape of pricing kernel estimates.
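A hedged sketch of the ratio procedure under scrutiny: estimate the physical density from a finite return sample, take a risk-neutral density as given, and form their ratio. In practice the risk-neutral density is backed out of option prices; here both distributions are simulated stand-ins, which is an illustrative assumption.

import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(3)

# Physical density: kernel estimate from a finite return sample.
returns = rng.normal(0.05, 0.2, size=500)       # simulated P-sample
p_hat = gaussian_kde(returns)

# Risk-neutral density: assumed known in closed form for illustration.
q = lambda x: norm.pdf(x, loc=0.0, scale=0.2)

grid = np.linspace(-0.5, 0.6, 200)
pricing_kernel = q(grid) / p_hat(grid)          # M(x) = q(x) / p_hat(x)
# Small-sample wiggle in p_hat propagates into a non-monotone M(x),
# even though the underlying densities would give a monotone ratio.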

11.
We introduce a continuous-time framework for the prediction of outstanding liabilities, in which chain-ladder development factors arise as a histogram estimator of a cost-weighted hazard function running in reversed development time. We use this formulation to show that, under our assumptions on the individual data, chain-ladder is consistent. Consistency is understood in the sense that the number of observed claims grows to infinity while the level of aggregation tends to zero. We propose alternatives to chain-ladder development factors by replacing the histogram estimator with kernel smoothers and by estimating a cost-weighted density instead of a cost-weighted hazard. Finally, we provide a real-data example and a simulation study confirming the strengths of the proposed alternatives.
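For concreteness, a hedged sketch of the classical chain-ladder development factors computed from a cumulative run-off triangle (the histogram-type estimator the paper reinterprets in continuous time); the triangle values are illustrative assumptions.

import numpy as np

def development_factors(triangle):
    """Volume-weighted chain-ladder factors f_j from a cumulative run-off
    triangle (rows = accident periods, columns = development periods,
    NaN = not yet observed)."""
    n = triangle.shape[1]
    factors = []
    for j in range(n - 1):
        known = ~np.isnan(triangle[:, j + 1])
        factors.append(triangle[known, j + 1].sum() / triangle[known, j].sum())
    return np.array(factors)

cum = np.array([
    [100.0, 160.0, 180.0],
    [110.0, 170.0, np.nan],
    [120.0, np.nan, np.nan],
])
print(development_factors(cum))   # one factor per development step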

12.
This paper investigates ordering properties of the largest claim amounts and sample ranges arising from two sets of heterogeneous insurance portfolios. First, sufficient conditions are provided, in the sense of the usual stochastic order, for comparing the largest claim amounts from two sets of independent or interdependent claims. Second, comparison results for the largest claim amounts in the sense of the reversed hazard rate and hazard rate orders are established for two batches of heterogeneous independent claims. Finally, we present sufficient conditions for stochastically comparing sample ranges from two sets of heterogeneous claims by means of the usual stochastic order. Numerical examples illustrate the theoretical findings. The results established here not only extend and generalize those known in the literature, but also provide insight useful for setting the annual premiums of policyholders.
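A hedged simulation sketch of a comparison of this type, using the known exponential case: with the total hazard held fixed, more dispersed hazard rates make the largest claim stochastically larger (its CDF lies below pointwise). The rates and sample sizes are illustrative assumptions, and the check holds only up to Monte Carlo error.

import numpy as np

rng = np.random.default_rng(4)

def max_claim_sample(rates, n=200000):
    """Draw the largest claim from a portfolio of independent
    exponential claims with the given hazard rates."""
    claims = rng.exponential(1.0 / np.asarray(rates), size=(n, len(rates)))
    return claims.max(axis=1)

homog = max_claim_sample([1.0, 1.0])
heterog = max_claim_sample([0.5, 1.5])   # same total hazard, more dispersed

grid = np.linspace(0.1, 10.0, 50)
cdf = lambda s: (s[:, None] <= grid).mean(axis=0)
# Usual stochastic order: the heterogeneous maximum has the smaller CDF.
print(np.all(cdf(heterog) <= cdf(homog) + 0.005))   # True up to MC error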

13.
The well-known Proportional Hazard Premium Principle, introduced by Wang (1996), depends on the survival function of the insured risk and a risk-aversion index. Using this premium principle, we propose an asymptotically normal semi-parametric estimator for the net premium of a high-excess loss layer of heavy-tailed claim amounts. An algorithm to compute confidence bounds is given. Moreover, a comparison between this estimator and the nonparametric estimator proposed by Necir & Boukhetala (2004) is carried out.
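For orientation, a hedged empirical version of Wang's (1996) proportional hazard premium for an excess layer above a retention, using a simple plug-in of the empirical survival function. This is not the paper's semi-parametric heavy-tail estimator; the distortion index, layer bounds, and simulated claims are illustrative assumptions.

import numpy as np

def ph_premium_layer(claims, retention, rho=1.2, upper=None):
    """Plug-in proportional hazard premium of the layer (retention, upper):
    integrate (empirical survival)^(1/rho) over the layer; rho > 1
    encodes risk aversion by lifting the tail."""
    x = np.sort(claims)
    upper = x[-1] if upper is None else upper
    grid = np.linspace(retention, upper, 2000)
    surv = 1.0 - np.searchsorted(x, grid, side="right") / len(x)
    y = surv ** (1.0 / rho)
    return float(np.sum((y[:-1] + y[1:]) / 2.0) * (grid[1] - grid[0]))

claims = np.random.default_rng(5).pareto(2.5, size=1000) + 1.0
print(ph_premium_layer(claims, retention=5.0))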

14.
In this paper, we propose a new, efficient method for estimating the Gerber–Shiu discounted penalty function in the classical risk model. We expand the Gerber–Shiu function on the Laguerre basis and then estimate the unknown coefficients from sample information on claim numbers and individual claim sizes. The convergence rate of the estimator is derived. Simulation examples show that the estimator performs very well for finite sample sizes and that it outperforms competing estimators.
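A hedged sketch of the Laguerre machinery only, not the paper's coefficient estimator: the orthonormal Laguerre functions on [0, ∞) and a projection of a known function onto the first K of them. The target function is an illustrative stand-in for a Gerber–Shiu function, and the integration grid is an assumption.

import numpy as np
from numpy.polynomial.laguerre import lagval

def laguerre_fn(k, x):
    """Orthonormal Laguerre function: sqrt(2) * L_k(2x) * exp(-x)."""
    coef = np.zeros(k + 1)
    coef[k] = 1.0
    return np.sqrt(2.0) * lagval(2.0 * x, coef) * np.exp(-x)

def project(f, K=10, upper=50.0, n=20001):
    """Coefficients a_k = integral of f(x) * phi_k(x) dx (trapezoid rule)."""
    x = np.linspace(0.0, upper, n)
    dx = x[1] - x[0]
    coefs = []
    for k in range(K):
        y = f(x) * laguerre_fn(k, x)
        coefs.append(np.sum((y[:-1] + y[1:]) / 2.0) * dx)
    return np.array(coefs)

target = lambda x: np.exp(-1.5 * x)       # stand-in for a Gerber-Shiu function
a = project(target)
x = np.linspace(0.0, 5.0, 6)
approx = sum(a[k] * laguerre_fn(k, x) for k in range(len(a)))
print(np.max(np.abs(approx - target(x))))  # small truncation error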

15.
We claim that regressing excess returns on one-period-lagged volatility provides only a limited picture of the dynamic effect of idiosyncratic risk, which tends to be persistent over time. By correcting for the serial correlation in idiosyncratic volatility, we find that idiosyncratic volatility has a significant positive effect. This finding appears robust across firm-size portfolios, sample periods, and measures of idiosyncratic risk. Our findings suggest that stock markets misprice idiosyncratic risk. There may be measurement problems with idiosyncratic risk that could be related to nondiversifiable risk.
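One plausible two-step reading of "correcting for serial correlation", sketched on simulated data; this is not necessarily the authors' exact specification, and all data-generating parameters are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(5)

# Illustrative data: persistent idiosyncratic volatility and returns.
T = 1000
vol = np.empty(T)
vol[0] = 0.2
for t in range(1, T):
    vol[t] = 0.02 + 0.9 * vol[t - 1] + 0.01 * rng.standard_normal()
ret = 0.5 * vol + 0.05 * rng.standard_normal(T)

# Step 1: AR(1) fit for volatility gives E[vol_t | vol_{t-1}].
X = np.column_stack([np.ones(T - 1), vol[:-1]])
phi, *_ = np.linalg.lstsq(X, vol[1:], rcond=None)
evol = X @ phi

# Step 2: regress excess returns on the conditionally expected volatility
# rather than on raw one-period-lagged volatility.
Z = np.column_stack([np.ones(T - 1), evol])
b, *_ = np.linalg.lstsq(Z, ret[1:], rcond=None)
print(b[1])   # positive slope on expected idiosyncratic volatility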

16.
We study the asymptotic tail behaviour of reinsured amounts of the LCR and ECOMOR treaties under a time-dependent renewal risk model, in which a dependence structure is introduced between each claim size and the interarrival time before it. Assuming that the claim size distribution has a subexponential tail, we derive some precise asymptotic results for both treaties.
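For reference, a hedged sketch of the reinsured amounts under the two treaties (the quantities whose tails are studied): LCR pays the l largest claims, and ECOMOR pays their excesses over the (l+1)-th largest, which acts as a random retention. The sample claims are illustrative.

import numpy as np

def lcr_amount(claims, l):
    """Largest Claims Reinsurance: sum of the l largest claims."""
    return np.sort(claims)[-l:].sum()

def ecomor_amount(claims, l):
    """ECOMOR: sum of the excesses of the l largest claims over the
    (l+1)-th largest claim."""
    s = np.sort(claims)
    retention = s[-(l + 1)]
    return np.sum(s[-l:] - retention)

claims = np.array([5.0, 1.2, 9.3, 0.7, 3.4, 12.1])
print(lcr_amount(claims, 2), ecomor_amount(claims, 2))
# LCR: 12.1 + 9.3 = 21.4; ECOMOR: (12.1 - 5.0) + (9.3 - 5.0) = 11.4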

17.
Extract

While in some linear estimation problems the principle of unbiasedness can be said to be appropriate, we have just seen that in the present context we must appeal to other criteria. Let us first consider what the maximum likelihood method yields. We do not claim any particular optimum property for this estimate of the risk distribution; it seems plausible, however, that one can prove a large-sample result analogous to the classical result on maximum likelihood estimation.

18.
If one is interested in managing fraud, one must measure the fraud rate in order to assess the extent of the problem and the effectiveness of the fraud management technique. This article offers a robust new method for estimating the fraud rate, PRIDIT-FRE (PRIDIT-based Fraud Rate Estimation), developed from PRIDIT, an unsupervised fraud detection method that assesses the fraud suspiciousness of individual claims. PRIDIT-FRE presents the first nonparametric unsupervised estimator of the actual rate of fraud in a population of claims, robust to the bias contained in an audited sample (arising from the quality or individual hubris of an auditor or investigator, or from the natural data-gathering process through claims adjusting). PRIDIT-FRE exploits the internal consistency of fraud predictors and requires only a small audited sample, or an unaudited sample alone. Using two insurance fraud datasets with different characteristics, we illustrate the effectiveness of PRIDIT-FRE and examine its robustness in varying scenarios.
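A hedged sketch of the scoring machinery PRIDIT builds on: RIDIT-score each ordinal fraud indicator (proportion of responses below a category minus the proportion above it), then take a first principal component as a per-claim suspicion score. This is the general PRIDIT recipe, not the fraud-rate estimator itself; the indicator data are illustrative.

import numpy as np

def ridit_scores(column):
    """RIDIT-score an ordinal variable: each category receives
    P(lower categories) - P(higher categories)."""
    cats, counts = np.unique(column, return_counts=True)
    p = counts / counts.sum()
    below = np.concatenate([[0.0], np.cumsum(p)[:-1]])
    above = 1.0 - np.cumsum(p)
    score = below - above
    return score[np.searchsorted(cats, column)]

def pridit_suspicion(indicators):
    """First principal component of the RIDIT-scored indicator matrix,
    used as a per-claim suspicion score (defined up to sign)."""
    scored = np.column_stack([ridit_scores(c) for c in indicators.T])
    scored -= scored.mean(axis=0)
    _, _, vt = np.linalg.svd(scored, full_matrices=False)
    return scored @ vt[0]

rng = np.random.default_rng(6)
indicators = rng.integers(0, 3, size=(200, 5))   # 5 ordinal fraud indicators
scores = pridit_suspicion(indicators)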

19.
In the context of an insurance portfolio that provides dividend income for the insurance company's shareholders, an important problem in risk theory concerns barrier strategies: whenever the surplus attains the barrier level, premium income is paid to shareholders as dividends until the next claim occurs. In this paper, we are concerned with estimating the optimal dividend barrier, defined as the barrier level that maximizes the expected discounted dividends until ruin, under the widely used compound Poisson model for the aggregate claims process. We propose a semi-parametric statistical procedure for estimating the optimal dividend barrier, which is critically needed in applications. We first construct a consistent estimator of an objective function that is intricately related to the expected discounted dividends, and then take the estimated optimal dividend barrier to be the minimizer of the estimated objective function. In theory, we show that the constructed estimator of the optimal dividend barrier is statistically consistent. Numerical experiments with both simulated and real data demonstrate that the proposed estimators work reasonably well with samples of appropriate size.
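A hedged brute-force sketch of the underlying optimization, not the paper's semi-parametric procedure: approximate the expected discounted dividends by simulating surplus paths under each candidate barrier and pick the best. The exponential claims, discount rate, horizon, and other parameters are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(7)

def expected_discounted_dividends(b, u=2.0, c=1.5, lam=1.0, delta=0.05,
                                  horizon=150.0, n_paths=1000):
    """Monte Carlo value of a barrier strategy with barrier b: above b all
    premium income is paid as dividends, discounted at rate delta, until
    ruin (exponential claims with mean 1; any initial excess over b is
    ignored for simplicity)."""
    total = 0.0
    for _ in range(n_paths):
        t, x, value = 0.0, min(u, b), 0.0
        while t < horizon:
            w = rng.exponential(1.0 / lam)      # time to next claim
            hit = (b - x) / c                   # time until surplus reaches b
            if w > hit:                         # dividends flow at rate c
                value += c * (np.exp(-delta * (t + hit)) -
                              np.exp(-delta * (t + w))) / delta
                x = b
            else:
                x += c * w
            t += w
            x -= rng.exponential(1.0)           # claim arrives
            if x < 0.0:                         # ruin ends the path
                break
        total += value
    return total / n_paths

barriers = np.linspace(1.0, 8.0, 8)
values = [expected_discounted_dividends(b) for b in barriers]
print(barriers[int(np.argmax(values))])         # estimated optimal barrier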

20.
In this paper, we examine investors' risk preferences implied by option prices. To derive these preferences, we specify the functional form of a pricing kernel and then shift its parameters until realized returns are best explained by the subjective probability density function, which is the ratio of the risk-neutral probability density function to the pricing kernel. We examine, alternatively, pricing kernels of power, exponential, and higher-order polynomial forms. Using S&P 500 index options, we find surprising evidence of risk neutrality, rather than risk aversion, in both the power and exponential cases. When the specification of the pricing kernel is extended to higher-order polynomial functions, we obtain functions exhibiting ‘monotonically decreasing’ relative risk aversion (DRRA) and anomalous ‘inverted U-shaped’ relative risk aversion. We find, however, that only the DRRA function is robust to variation in sample characteristics and statistically significant. Finally, most of our empirical results remain consistent even when market imperfections such as illiquidity are taken into account.
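A hedged sketch of the power-kernel case of this procedure: with a pricing kernel m(x) = x^(-gamma), the subjective density is proportional to q(x) * x^gamma, and gamma is chosen to maximize the likelihood of realized returns. In practice the risk-neutral density q comes from option prices; here q and the realized returns are simulated stand-ins, which is an illustrative assumption.

import numpy as np
from scipy.stats import lognorm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(8)

# Illustrative stand-ins: a lognormal risk-neutral density for gross
# returns and a sample of realized gross returns.
q = lambda x: lognorm.pdf(x, s=0.2, scale=1.0)
realized = rng.lognormal(0.06, 0.2, size=300)

grid = np.linspace(0.3, 2.5, 4000)
dx = grid[1] - grid[0]

def neg_loglik(gamma):
    """Subjective density p = q / m with power kernel m(x) = x^(-gamma),
    renormalized on the grid; return the negative log-likelihood."""
    p_unnorm = q(grid) * grid ** gamma
    Z = np.sum((p_unnorm[:-1] + p_unnorm[1:]) / 2.0) * dx
    return -np.sum(np.log(q(realized) * realized ** gamma / Z))

res = minimize_scalar(neg_loglik, bounds=(-5.0, 20.0), method="bounded")
print(res.x)   # implied relative risk aversion of the power kernel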
