Similar Articles
20 similar articles found
1.
We conduct a simulation analysis of the Fama and MacBeth [1973. Risk, return, and equilibrium: Empirical tests. Journal of Political Economy 81, 607–636.] two-pass procedure, as well as maximum likelihood (ML) and generalized method of moments (GMM) estimators of cross-sectional expected return models. We also provide new analytical results on computational issues, the relations between estimators, and asymptotic distributions under model misspecification. The generalized least squares (GLS) estimator is often much more precise than the usual ordinary least squares (OLS) estimator, but it also displays more bias. A “truncated” form of ML performs quite well overall in terms of bias and precision, but produces less reliable inferences than the OLS estimator.
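The two-pass procedure these simulations target can be sketched as follows. All sample sizes and parameter values below are hypothetical, chosen only to illustrate the mechanics (first-pass time-series betas, then period-by-period cross-sectional regressions whose slopes are averaged):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 240, 25                          # months, assets (hypothetical sizes)
f = rng.normal(0.005, 0.04, T)          # factor returns
beta_true = rng.uniform(0.5, 1.5, N)
R = 0.002 + np.outer(f, beta_true) + rng.normal(0, 0.05, (T, N))

# Pass 1: time-series OLS of each asset's returns on the factor -> betas
X = np.column_stack([np.ones(T), f])
betas = np.linalg.lstsq(X, R, rcond=None)[0][1]     # slope for each asset

# Pass 2: cross-sectional OLS of returns on estimated betas, each period
Z = np.column_stack([np.ones(N), betas])
gammas = np.array([np.linalg.lstsq(Z, R[t], rcond=None)[0] for t in range(T)])

# Fama-MacBeth estimates: time-series means of the cross-sectional slopes,
# with standard errors from the time-series variation of those slopes
gamma_hat = gammas.mean(axis=0)
gamma_se = gammas.std(axis=0, ddof=1) / np.sqrt(T)
```

The second-pass slope estimate should recover roughly the average factor realization, illustrating why the paper's errors-in-variables and GLS-vs-OLS comparisons matter.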

2.
This paper characterizes the behavior of observed asset prices under price limits and proposes the use of two-limit truncated and Tobit regression models to analyze regression models whose dependent variable is subject to price limits. Through a proper arrangement of the sample, these two models, the estimation of which is easy to implement, are applied only to subsets of the sample under study, rather than the full sample. Using the estimation of simple linear regression model as an example, several Monte Carlo experiments are conducted to compare the performance of the maximum likelihood estimators (MLEs) based on these two models and a generalized method of moments (GMM) estimator developed by K. C. John Wei and R. Chiang. The results show that under different price limits and various distributional assumptions for the error terms, the MLEs based on the two-limit Tobit and truncated regression models and the GMM estimator perform reasonably well, while the naive OLS estimator is downward biased. Overall, the MLE based on the two-limit Tobit model outperforms the other estimators.
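A minimal illustration of the censoring problem and a two-limit Tobit MLE, on a simulated linear model with symmetric, hypothetical price limits (this is a sketch of the general technique, not the paper's exact design): the naive OLS slope is attenuated toward zero, while the Tobit likelihood recovers the latent coefficient.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(0.0, 1.0, n)
y_star = 0.8 * x + rng.normal(0.0, 0.5, n)    # latent (unconstrained) return
L, U = -0.7, 0.7                               # hypothetical price limits
y = np.clip(y_star, L, U)                      # observed, limit-censored return
at_lo, at_hi = y <= L, y >= U
mid = ~(at_lo | at_hi)

def negll(params):
    """Negative log-likelihood of the two-limit Tobit model."""
    b, log_s = params
    s = np.exp(log_s)
    ll = norm.logpdf((y[mid] - b * x[mid]) / s).sum() - mid.sum() * log_s
    ll += norm.logcdf((L - b * x[at_lo]) / s).sum()   # pinned at lower limit
    ll += norm.logsf((U - b * x[at_hi]) / s).sum()    # pinned at upper limit
    return -ll

b_tobit = minimize(negll, x0=[0.1, 0.0], method="BFGS").x[0]
b_naive = np.cov(x, y)[0, 1] / np.var(x)       # OLS slope, biased downward
```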

3.
Abstract

This article investigates performance of interval estimators of various actuarial risk measures. We consider the following risk measures: proportional hazards transform (PHT), Wang transform (WT), value-at-risk (VaR), and conditional tail expectation (CTE). Confidence intervals for these measures are constructed by applying nonparametric approaches (empirical and bootstrap), the strict parametric approach (based on the maximum likelihood estimators), and robust parametric procedures (based on trimmed means).

Using Monte Carlo simulations, we compare the average lengths and proportions of coverage (of the true measure) of the intervals under two data-generating scenarios: “clean” data and “contaminated” data. In the “clean” case, data sets are generated by the following (similar shape) parametric families: exponential, Pareto, and lognormal. Parameters of these distributions are selected so that all three families are equally risky with respect to a fixed risk measure. In the “contaminated” case, the “clean” data sets from these distributions are mixed with a small fraction of unusual observations (outliers). It is found that approximate knowledge of the underlying distribution combined with a sufficiently robust estimator (designed for that distribution) yields intervals with satisfactory performance under both scenarios.
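The nonparametric bootstrap approach compared here can be sketched as a percentile interval for CTE on "clean" exponential data; the scale and sample sizes below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
losses = rng.exponential(scale=10.0, size=500)     # "clean" exponential losses

def cte(sample, p=0.95):
    """Conditional tail expectation: mean loss beyond the p-level VaR."""
    var_p = np.quantile(sample, p)
    return sample[sample >= var_p].mean()

# Bootstrap: resample with replacement, re-estimate CTE each time
B = 2000
boot = np.array([cte(rng.choice(losses, losses.size, replace=True))
                 for _ in range(B)])
lo, hi = np.quantile(boot, [0.025, 0.975])         # 95% percentile interval
```

Coverage and average length of such intervals are exactly what the simulation study tabulates across the clean and contaminated scenarios.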

4.
This paper utilizes asymptotic analysis and daily security returns to examine the estimation efficiency of two unbiased robust estimators compared with ordinary least squares. Our results demonstrate a relative efficiency gain for a nonparametric rank estimator and a relative efficiency loss for the minimum absolute deviation estimator when estimating the systematic risk of securities using daily security returns.

5.
In this paper we demonstrate that robust estimators improve the reliability of estimates of beta coefficients on small, thinly traded stock markets. We outline several different types of robust and bounded-influence regression estimators and assess them using a jackknife methodology on data from the Johannesburg Stock Exchange. The empirical evidence confirms the hypothesis that robust estimators are more efficient than least squares estimators and indicates that least squares estimators may overestimate systematic risk in some cases.

6.
Is the Short Rate Drift Actually Nonlinear?
Aït-Sahalia (1996) and Stanton (1997) use nonparametric estimators applied to short-term interest rate data to conclude that the drift function contains important nonlinearities. We study the finite-sample properties of their estimators by applying them to simulated sample paths of a square-root diffusion. Although the drift function is linear, both estimators suggest nonlinearities of the type and magnitude reported in Aït-Sahalia (1996) and Stanton (1997). Combined with the results of a weighted least squares estimator, this evidence implies that nonlinearity of the short rate drift is not a robust stylized fact.

7.
Abstract

As is well known in actuarial practice, excess claims (outliers) have a disturbing effect on the ratemaking process. To obtain better estimators of premiums based on credibility theory, Künsch, and Gisler and Reinhard, suggested using robust methods. The estimators proposed by these authors are indeed resistant to outliers and serve as an excellent example of how useful robust models can be for insurance pricing. In this article we further refine these procedures by reducing the degree of heuristic arguments they involve. Specifically, we develop a class of robust estimators for the credibility premium when claims are approximately gamma-distributed and thoroughly study their robustness-efficiency trade-offs in large and small samples. Under specific data-generating scenarios, this approach yields quantitative indices of estimators’ strength and weakness, and it allows the actuary (who is typically equipped with information beyond the statistical model) to choose a procedure from a full menu of possibilities. Practical performance of our methods is illustrated under several simulated scenarios and by employing expert judgment.

8.
The stability of estimates is critical when applying advanced measurement approaches (AMA) such as the loss distribution approach (LDA) for operational risk capital modeling. Recent studies have identified issues associated with capital estimates obtained by applying the maximum likelihood estimation (MLE) method to truncated distributions: significant upward mean-bias, considerable uncertainty about the estimates, and non-robustness to both small and large losses. Although alternative estimation approaches have been proposed, there has not been any comprehensive study of how they perform compared to the MLE method. This paper is the first comprehensive study of the performance of various potentially promising alternative approaches (including the minimum distance approach, quantile distance approach, scaling-based bias correction, upward scaling of lower quantiles, and right-truncated distributions) as compared to MLE with regard to accuracy, precision, and robustness. More importantly, based on the properties of each estimator, we propose a right-truncation with probability-weighted least squares method, which combines a right-truncated distribution with minimization of a probability-weighted distance (the quadratic upper-tail Anderson–Darling distance). Both simulation results and a real-data application show that it significantly reduces the bias and volatility of capital estimates and improves their robustness to small losses near the threshold and to moving the threshold.

9.
The accuracy of real estate indices: Repeat sale estimators
Simulation techniques allow us to examine the behavior and accuracy of several repeat sales regression estimators used to construct real estate return indices. We show that the generalized least squares (GLS) method is the maximum likelihood estimator, and we show how estimation accuracy can be significantly improved through a Bayesian approach. In addition, we introduce a biased estimation procedure based upon the James and Stein method to address the problems of multicollinearity common to the procedure.

10.
This article examines the impact of cognitive skills on the income of households in Ghana. It uses scores on mathematics and English tests to measure cognitive skills and estimates the returns to these skills based on farm profit, off-farm income, and total income. The article uses Powell's censored least absolute deviations and symmetrically trimmed least squares estimators to estimate farm and off-farm income. In contrast to Heckman's two-step or the Tobit estimator, Powell's estimators are consistent in the presence of heteroscedasticity and are robust to other violations of normality. The results show that cognitive skills have a positive effect on total and off-farm income but do not have a statistically significant effect on farm income.

11.
Although Tobin's q is an attractive theoretical firm performance measure, its empirical construction is subject to considerable measurement error. In this paper we compare five estimators of q that range from a simple-to-construct estimator based on book-values to a relatively complex estimator based upon the methodology developed by Lindenberg and Ross (1981). We present comparisons of the means, medians and variances of the q estimates, and examine how robust sorting and regression results are to changes in the construction of q. We find that empirical results are sensitive to the method used to estimate Tobin's q. The simple-to-construct estimator produces empirical results that differ significantly from the alternative estimators. Among the other four estimators, one developed by Hall (1990) produces means that are higher and variances that are larger than the three alternative estimators, but does approximate those estimators in most of the empirical comparisons. Those three alternative q ratio estimators, furthermore, produce empirical results that are robust.

12.
In this article, we examine a generalized version of an identity made famous by Stein, who constructed the so-called Stein's Lemma in the special case of a normal distribution. Other works followed to extend the lemma to the larger class of elliptical distributions. The lemma has had many applications in statistics, finance, insurance, and actuarial science. In an attempt to broaden the application of this generalized identity, we consider the version in the case where we investigate only the tail portion of the distribution of a random variable. Understanding the tails of a distribution is very important in actuarial science and insurance. Our article therefore introduces the concept of the “tail Stein's identity” for any random variable defined on an appropriate probability space with a Lebesgue density function satisfying certain regularity conditions. We also extend this “tail Stein's identity” to the class of discrete distributions. This extended identity allows us to develop recursive formulas for generating tail conditional moments. As examples and illustrations, we consider several classes of distributions including the exponential family, and we apply this result to demonstrate how to generate tail conditional moments. This holds a large promise for applications in risk management.

13.
The probabilistic behavior of the claim severity variable plays a fundamental role in calculation of deductibles, layers, loss elimination ratios, effects of inflation, and other quantities arising in insurance. Among several alternatives for modeling severity, the parametric approach continues to maintain the leading position, which is primarily due to its parsimony and flexibility. In this article, several parametric families are employed to model severity of Norwegian fire claims for the years 1981 through 1992. The probability distributions we consider include generalized Pareto, lognormal-Pareto (two versions), Weibull-Pareto (two versions), and folded-t. Except for the generalized Pareto distribution, the other five models are fairly new proposals that recently appeared in the actuarial literature. We use the maximum likelihood procedure to fit the models and assess the quality of their fits using basic graphical tools (quantile-quantile plots), two goodness-of-fit statistics (Kolmogorov-Smirnov and Anderson-Darling), and two information criteria (AIC and BIC). In addition, we estimate the tail risk of “ground up” Norwegian fire claims using the value-at-risk and tail-conditional median measures. We monitor the tail risk levels over time, for the period 1981 to 1992, and analyze predictive performances of the six probability models. In particular, we compute the next-year probability for a few upper tail events using the fitted models and compare them with the actual probabilities.
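The fit-then-assess workflow can be sketched on synthetic lognormal claims (not the Norwegian data); lognormal MLE is closed-form, and AIC and the Kolmogorov–Smirnov statistic follow directly. All parameter values are illustrative:

```python
import numpy as np
from math import erf

rng = np.random.default_rng(5)
claims = rng.lognormal(mean=2.0, sigma=1.0, size=1000)  # synthetic severities

# Lognormal MLE is closed-form: mean and std of the log-claims
logc = np.log(claims)
mu_hat, sig_hat = logc.mean(), logc.std()

# Log-likelihood and AIC (k = 2 fitted parameters)
loglik = np.sum(-np.log(claims * sig_hat) - 0.5 * np.log(2 * np.pi)
                - (logc - mu_hat) ** 2 / (2 * sig_hat ** 2))
aic = 2 * 2 - 2 * loglik

# Kolmogorov-Smirnov distance between empirical and fitted CDFs
z = np.sort((logc - mu_hat) / sig_hat)
F = 0.5 * (1 + np.vectorize(erf)(z / np.sqrt(2)))   # fitted CDF at the data
n = claims.size
ks = max((F - np.arange(n) / n).max(), (np.arange(1, n + 1) / n - F).max())
```

Competing families (Pareto variants, folded-t, etc.) would be compared by repeating this and ranking on KS, AIC, and BIC.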

14.
The paper investigates the properties of a portfolio composed of a large number of assets driven by a strong multivariate GARCH(1,1) process with heterogeneous parameters. The aggregate return is shown to be a weak GARCH process with a (possibly large) number of lags, which reflects the moments of the distribution of the individual persistence parameters. The paper describes a consistent estimator of the aggregate return dynamics, based on nonlinear least squares. The proposed aggregation-corrected estimator (ACE) performs very well and outperforms some competing estimators in forecasting the daily variance of U.S. stocks portfolios at different horizons.

15.
We study tail estimation in Pareto-like settings for datasets with a high percentage of randomly right-censored data, and where some expert information on the tail index is available for the censored observations. This setting arises naturally, for instance, for liability insurance claims, where actuarial experts build reserves based on the specificity of each open claim, which can be used to improve the estimation based on the already available data points from closed claims. Through an entropy-perturbed likelihood, we derive an explicit estimator and establish a close analogy with Bayesian methods. Embedded in an extreme value approach, asymptotic normality of the estimator is shown, and when the expert is clairvoyant, a simple combination formula can be deduced, bridging the classical statistical approach with the expert information. Following the aforementioned combination formula, a combination of quantile estimators can be naturally defined. In a simulation study, the estimator is shown to often outperform the Hill estimator for censored observations and recent Bayesian solutions, some of which require more information than usually available. Finally, we perform a case study on a motor third-party liability insurance claim dataset, where Hill-type and quantile plots incorporate ultimate values into the estimation procedure in an intuitive manner.
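For reference, the classical Hill estimator used as a benchmark here can be sketched as follows, on an uncensored simulated Pareto sample (no censoring or expert information is modeled in this sketch; sample size and tail index are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = 2.0                                        # true Pareto tail index
x = rng.uniform(size=2000) ** (-1.0 / alpha)       # standard Pareto(alpha) draws

def hill(sample, k):
    """Hill estimate of the tail index from the k largest order statistics."""
    s = np.sort(sample)[::-1]                      # descending order statistics
    return 1.0 / np.log(s[:k] / s[k]).mean()

alpha_hat = hill(x, k=200)
```

The choice of k trades bias against variance; in practice it is read off a Hill plot of estimates across k.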

16.
A rich variety of probability distributions has been proposed in the actuarial literature for fitting of insurance loss data. Examples include: lognormal, log-t, various versions of Pareto, loglogistic, Weibull, gamma and its variants, and generalized beta of the second kind distributions, among others. In this paper, we supplement the literature by adding the log-folded-normal and log-folded-t families. Shapes of the density function and key distributional properties of the ‘folded’ distributions are presented along with three methods for the estimation of parameters: method of maximum likelihood; method of moments; and method of trimmed moments. Further, large and small-sample properties of these estimators are studied in detail. Finally, we fit the newly proposed distributions to data which represent the total damage done by 827 fires in Norway for the year 1988. The fitted models are then employed in a few quantitative risk management examples, where point and interval estimates for several value-at-risk measures are calculated.

17.
Abstract

A credibility estimator is Bayes in the restricted class of linear estimators and may be viewed as a linear approximation to the (unrestricted) Bayes estimator. When the structural parameters occurring in a credibility formula are replaced by consistent estimators based on data from a collective of similar risks, we obtain an empirical credibility estimator, which is a credibility counterpart of empirical Bayes estimators. Empirical credibility estimators are proposed under various model assumptions, and sufficient conditions for asymptotic optimality are established.
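A minimal empirical-credibility sketch in the Bühlmann spirit, where the structural parameters (overall mean, within-risk and between-risk variances) are replaced by estimates from a simulated collective; all sizes and parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 50, 10                                   # risks in the collective, years each
theta = rng.normal(100, 15, m)                  # unobserved risk-level true means
X = theta[:, None] + rng.normal(0, 20, (m, n))  # observed yearly claims

# Estimate structural parameters from the collective
xbar_i = X.mean(axis=1)
mu_hat = xbar_i.mean()                           # overall mean
s2_hat = X.var(axis=1, ddof=1).mean()            # within-risk variance
a_hat = max(xbar_i.var(ddof=1) - s2_hat / n, 0)  # between-risk variance

# Credibility factor and per-risk empirical credibility premium
Z = n / (n + s2_hat / a_hat)
premium = Z * xbar_i + (1 - Z) * mu_hat
```

Each premium shrinks a risk's own experience toward the collective mean, with Z approaching 1 as more years of data accumulate.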

18.
Understanding the dependence among economies is relevant to policy makers, central banks, and investors in the decision-making process. One important issue for study is the existence of contagion among economies. This work considers the Canonical Model of Contagion of Pesaran and Pick (Journal of Economic Dynamics and Control, 2007), which differentiates contagion from interdependence. The ordinary least squares estimator of this model is biased by the endogenous variables in the model, so instrumental variables are used to reduce that bias. The model is extended to the case of heteroskedastic errors, a feature generally found in financial data. We postulate the conditional volatility of the performance indices as instrumental variables and analyze the validity of these instruments using Monte Carlo simulations, which estimate the distributions of the estimators under the null hypothesis. Finally, the canonical model of contagion is used to analyze contagion among seven Asian countries.

19.
A time-homogeneous, purely discontinuous, parsimonious Markov martingale model is proposed for the risk-neutral dynamics of equity forward prices. Transition probabilities are in the variance gamma class with spot-dependent parameters. Markov chain approximations give access to option prices. The model is estimated on option prices across strike and maturity for five days at a time. Properties of the estimated processes are described via an analysis of return quantiles and momentum functions that measure the response of tail probabilities to moves in these quantiles. Momentum and reversion are also addressed via the construction of reverse conditional expectations. Term structures for the moments of marginal distributions support a decay in skewness and excess kurtosis with maturity at rates slower than those implied by Lévy processes. Out-of-sample performance is additionally reported. It is observed that risk-neutral dynamics by and large reflect the presence of momentum in numerous probabilities. However, there is some reversion in the upper quantiles of risk-neutral return distributions.

20.
Ordinary least squares (OLS) regression is relatively sensitive to the presence of outliers in a data set. In this paper, a robust estimation method, least median of squares (LMS), is used to identify outliers in land value data. Once the outliers are identified, the land value equations are re-estimated. The results show that a few observations can have a significant effect on the estimated coefficients. Finally, the observations which were identified as outliers were examined in more detail. One cause of outliers is an omitted variable. In this case, a large fraction of the outliers were found to be observations with high development potential.
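A minimal LMS sketch on simulated (not land value) data: random two-point elemental fits approximate the least-median-of-squares line, a preliminary robust scale is computed from the minimized median, and observations beyond 2.5 robust standard deviations are flagged. All data-generating values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
x = rng.uniform(0, 10, n)
y = 2.0 + 1.5 * x + rng.normal(0, 1, n)
y[:10] += 25.0                                 # contaminate 10 observations

def lms_line(x, y, trials=3000, rng=rng):
    """Approximate least-median-of-squares line via random two-point fits."""
    best, best_med = None, np.inf
    for _ in range(trials):
        i, j = rng.choice(len(x), 2, replace=False)
        if x[i] == x[j]:
            continue
        b = (y[j] - y[i]) / (x[j] - x[i])
        a = y[i] - b * x[i]
        med = np.median((y - a - b * x) ** 2)   # LMS objective
        if med < best_med:
            best, best_med = (a, b), med
    return best, best_med

(a, b), med = lms_line(x, y)
# Preliminary robust scale (Rousseeuw's finite-sample correction), 2.5-sigma flags
s = 1.4826 * (1 + 5.0 / (n - 2)) * np.sqrt(med)
outliers = np.abs(y - a - b * x) / s > 2.5
```

The flagged points would then be inspected and the equation re-estimated without them, mirroring the paper's procedure.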

