Similar Articles

20 similar articles were retrieved.
1.
Abstract

Estimation of the tail index parameter of a single-parameter Pareto model has wide application in actuarial and other sciences. Here we examine various estimators from the standpoint of two competing criteria: efficiency and robustness against upper outliers. With the maximum likelihood estimator (MLE) being efficient but nonrobust, we desire alternative estimators that retain a relatively high degree of efficiency while also being adequately robust. A new generalized median type estimator is introduced and compared with the MLE and several well-established estimators associated with the methods of moments, trimming, least squares, quantiles, and percentile matching. The method of moments and least squares estimators are found to be relatively deficient with respect to both criteria and should be disfavored, while the trimmed mean and generalized median estimators tend to dominate the other competitors. The generalized median type performs best overall. These findings provide a basis for revising and updating prevailing viewpoints. Other topics discussed are applications to robust estimation of upper quantiles, tail probabilities, and actuarial quantities such as stop-loss and excess-of-loss reinsurance premiums, which arise in connection with portfolio solvency. Robust parametric methods are compared with empirical nonparametric methods, which are typically nonrobust.
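As a hedged illustration of the efficiency/robustness trade-off described above, the sketch below contrasts the closed-form MLE of the single-parameter Pareto tail index with a simple median-type alternative (not the article's generalized median estimator) on clean and outlier-contaminated samples; the parameter values and contamination scheme are illustrative assumptions.

```python
import numpy as np

def pareto_mle_alpha(x, x_m):
    """MLE of the tail index for a single-parameter Pareto with known threshold x_m:
    alpha_hat = n / sum(log(x_i / x_m))."""
    x = np.asarray(x, dtype=float)
    return x.size / np.log(x / x_m).sum()

def pareto_median_alpha(x, x_m):
    """Simple median-type estimator: log-exceedances are Exp(alpha), whose median is
    log(2)/alpha, so alpha_hat = log(2) / median(log(x/x_m)).  This only illustrates a
    robust alternative; it is not the article's generalized median estimator."""
    z = np.log(np.asarray(x, dtype=float) / x_m)
    return np.log(2.0) / np.median(z)

rng = np.random.default_rng(0)
x_m, alpha, n = 1.0, 2.0, 500
clean = x_m * rng.uniform(size=n) ** (-1.0 / alpha)      # Pareto(alpha) sample
dirty = np.concatenate([clean, 50.0 * clean[:10]])        # inject upper outliers

for label, data in [("clean", clean), ("contaminated", dirty)]:
    print(label, round(pareto_mle_alpha(data, x_m), 3),
          round(pareto_median_alpha(data, x_m), 3))
```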

2.
Abstract

This article investigates performance of interval estimators of various actuarial risk measures. We consider the following risk measures: proportional hazards transform (PHT), Wang transform (WT), value-at-risk (VaR), and conditional tail expectation (CTE). Confidence intervals for these measures are constructed by applying nonparametric approaches (empirical and bootstrap), the strict parametric approach (based on the maximum likelihood estimators), and robust parametric procedures (based on trimmed means).

Using Monte Carlo simulations, we compare the average lengths and proportions of coverage (of the true measure) of the intervals under two data-generating scenarios: “clean” data and “contaminated” data. In the “clean” case, data sets are generated by the following (similarly shaped) parametric families: exponential, Pareto, and lognormal. Parameters of these distributions are selected so that all three families are equally risky with respect to a fixed risk measure. In the “contaminated” case, the “clean” data sets from these distributions are mixed with a small fraction of unusual observations (outliers). It is found that approximate knowledge of the underlying distribution, combined with a sufficiently robust estimator (designed for that distribution), yields intervals with satisfactory performance under both scenarios.
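A minimal sketch of the nonparametric side of this comparison: empirical VaR and CTE with percentile-bootstrap confidence intervals. The exponential loss scale, confidence level, and bootstrap settings are illustrative assumptions, not the article's simulation design.

```python
import numpy as np

def var_cte(sample, level=0.95):
    """Empirical VaR (quantile) and CTE (mean of losses at or beyond the VaR)."""
    s = np.sort(np.asarray(sample, dtype=float))
    q = np.quantile(s, level)
    tail = s[s >= q]
    return q, tail.mean()

def bootstrap_ci(sample, stat, n_boot=2000, alpha=0.10, seed=0):
    """Percentile bootstrap confidence interval for a scalar statistic."""
    rng = np.random.default_rng(seed)
    reps = np.array([stat(rng.choice(sample, size=len(sample), replace=True))
                     for _ in range(n_boot)])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(1)
losses = rng.exponential(scale=10.0, size=1000)            # "clean" exponential losses
print("VaR 90% CI:", bootstrap_ci(losses, lambda x: var_cte(x)[0]))
print("CTE 90% CI:", bootstrap_ci(losses, lambda x: var_cte(x)[1]))
```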

3.
Summary

An estimator which is a linear function of the observations and which minimises the expected squared error within the class of linear estimators is called an “optimal linear” estimator. Such an estimator may also be regarded as a “linear Bayes” estimator in the spirit of Hartigan (1969). Optimal linear estimators of the unknown mean of a given data distribution have been described by various authors; corresponding “linear empirical Bayes” estimators have also been developed.

The present paper exploits the results of Lloyd (1952) to obtain optimal linear estimators, based on order statistics, of the location and/or scale parameter(s) of a continuous univariate data distribution. Related “linear empirical Bayes” estimators, which can be applied in the absence of exact knowledge of the optimal estimators, are also developed. This approach allows one to extend the results to the case of censored samples.

4.
Abstract

A credibility estimator is Bayes in the restricted class of linear estimators and may be viewed as a linear approximation to the (unrestricted) Bayes estimator. When the structural parameters occurring in a credibility formula are replaced by consistent estimators based on data from a collective of similar risks, we obtain an empirical credibility estimator, which is a credibility counterpart of empirical Bayes estimators. Empirical credibility estimators are proposed under various model assumptions, and sufficient conditions for asymptotic optimality are established.
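As a hedged sketch of an empirical credibility estimator, the code below implements the balanced Bühlmann case: the structural parameters in the credibility factor are replaced by standard unbiased estimators computed from a collective of similar risks. The data-generating numbers are illustrative assumptions.

```python
import numpy as np

def empirical_credibility(X):
    """Balanced Buhlmann model sketch: X is an (r x n) array of n observations for
    each of r similar risks.  Structural parameters are replaced by the usual
    unbiased estimators computed from the collective, giving an empirical
    credibility estimator in the sense of the abstract."""
    r, n = X.shape
    risk_means = X.mean(axis=1)
    mu_hat = risk_means.mean()                                  # collective mean
    s2_hat = X.var(axis=1, ddof=1).mean()                       # within-risk variance
    tau2_hat = max(risk_means.var(ddof=1) - s2_hat / n, 0.0)    # between-risk variance
    Z = n / (n + s2_hat / tau2_hat) if tau2_hat > 0 else 0.0    # credibility factor
    return Z * risk_means + (1 - Z) * mu_hat                    # credibility premiums

rng = np.random.default_rng(2)
true_means = rng.normal(100, 15, size=20)                       # heterogeneous risks
X = rng.normal(true_means[:, None], 30, size=(20, 5))           # 5 years per risk
print(empirical_credibility(X).round(2))
```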

5.
Will Any q Do?     
We find that the relative levels of computationally costly q estimators and simple q estimators, when used as continuous variables, are affected by variations in many firm financial characteristics. In contrast, when the estimators are used as dichotomous variables, they classify the vast majority of firms identically with respect to the unit q breakpoint. Finally, we find that the computationally costly approach may induce sample‐selection bias as a result of data unavailability. Our results suggest that the simple approach is preferable except when extreme precision of the q estimate is of paramount importance and sample‐selection bias is not likely to be an issue.

6.
On "q"     
The introduction of Tobin's q-ratio in the literature has prompted noteworthy advances in economic model-building and empirical analysis. A major shortcoming has been the lack of a convincing measure of marginal q. Herein an alternative approach for measuring marginal q is presented, one based on an event study of capital expenditure announcements. The measures of marginal q are shown to yield reasonable and consistent estimates. The marginal q variables perform well against the hypothesized relationships developed from q theory. A key result is that classifying firms according to their average q measures as a proxy for marginal q is likely to lead to misclassification problems.

7.
The main purpose of this paper is to compare the White (1980) heteroskedasticity-consistent (HC) covariance matrix estimator with alternative estimators. Many regression packages compute the White (1980) estimator, and the common procedure in accounting and finance research for dealing with heteroskedasticity relies on it, despite its inferior finite-sample properties compared with other consistent estimators. In this paper we compare several HC covariance matrix estimators based on a sample of 3706 European listed companies from Austria, Finland, France, Germany, Greece, Ireland, Italy, Netherlands, Norway, Portugal, Spain, Sweden and the United Kingdom. We conclude that HC standard errors increase when estimators with better finite-sample properties are used, and in most countries the Ohlson (1995) model coefficient estimates become statistically insignificant. This can be explained by the high-leverage points in the design matrix. To the best of our knowledge, this is the first time these alternative estimators have been compared with that of White (1980) in accounting research.
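For concreteness, the following hedged sketch computes White's HC0 estimator together with the finite-sample refinements HC1–HC3 for a simple OLS regression; the simulated heteroskedastic data are illustrative and unrelated to the paper's sample of European companies.

```python
import numpy as np

def hc_standard_errors(X, y, kind="HC3"):
    """Heteroskedasticity-consistent standard errors for OLS.
    HC0 is White (1980); HC1-HC3 are common finite-sample refinements.
    Minimal sketch: X is assumed to include the intercept column."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    e = y - X @ beta                                   # OLS residuals
    h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)        # leverages (hat-matrix diagonal)
    if kind == "HC0":
        omega = e**2
    elif kind == "HC1":
        omega = e**2 * n / (n - k)
    elif kind == "HC2":
        omega = e**2 / (1 - h)
    else:                                              # HC3
        omega = e**2 / (1 - h) ** 2
    cov = XtX_inv @ (X.T * omega) @ X @ XtX_inv        # sandwich estimator
    return beta, np.sqrt(np.diag(cov))

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 400)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 0.5 * x + rng.normal(scale=0.5 + 0.3 * x)    # heteroskedastic errors
for kind in ["HC0", "HC1", "HC2", "HC3"]:
    print(kind, hc_standard_errors(X, y, kind)[1].round(4))
```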

8.
We conduct a simulation analysis of the Fama and MacBeth [1973. Risk, return, and equilibrium: Empirical tests. Journal of Political Economy 81, 607–636.] two-pass procedure, as well as maximum likelihood (ML) and generalized method of moments estimators of cross-sectional expected return models. We also provide some new analytical results on computational issues, the relations between estimators, and asymptotic distributions under model misspecification. The generalized least squares estimator is often much more precise than the usual ordinary least squares (OLS) estimator, but it displays more bias as well. A “truncated” form of ML performs quite well overall in terms of bias and precision, but produces less reliable inferences than the OLS estimator.
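A minimal sketch of the Fama-MacBeth two-pass procedure on simulated data: full-sample time-series betas in the first pass, period-by-period cross-sectional regressions in the second, with premia and standard errors taken from the time series of slopes. The single-factor setup and all parameter values are illustrative assumptions.

```python
import numpy as np

def fama_macbeth(returns, factors):
    """Two-pass Fama-MacBeth sketch.
    returns: (T x N) asset excess returns; factors: (T x K) factor realizations.
    Pass 1: full-sample time-series betas.  Pass 2: cross-sectional OLS of returns
    on betas each period; premia are time-series means of the slopes, with
    Fama-MacBeth standard errors."""
    T, N = returns.shape
    F = np.column_stack([np.ones(T), factors])
    betas = np.linalg.lstsq(F, returns, rcond=None)[0][1:].T      # (N x K) betas
    B = np.column_stack([np.ones(N), betas])
    lambdas = np.array([np.linalg.lstsq(B, returns[t], rcond=None)[0]
                        for t in range(T)])                        # (T x (K+1)) slopes
    prem = lambdas.mean(axis=0)
    se = lambdas.std(axis=0, ddof=1) / np.sqrt(T)
    return prem, se

rng = np.random.default_rng(4)
T, N = 240, 25
f = rng.normal(0.005, 0.04, size=(T, 1))                           # one factor
true_beta = rng.uniform(0.5, 1.5, size=(N, 1))
R = 0.002 + f @ true_beta.T + rng.normal(0, 0.05, size=(T, N))
print(fama_macbeth(R, f))
```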

9.
This study extends the HAR-RV model to compare in detail the roles of leverage effects, jumps, and overnight information in predicting the realized volatilities (RV) of 21 international equity indices. First, the in-sample results suggest that these three factors have a significantly negative impact in most international equity markets. Second, the out-of-sample results show that leverage effects and overnight information have stronger predictive power than jumps. Furthermore, we provide convincing evidence that using these three factors simultaneously produces the best predictions for almost all international equity markets at all forecast horizons. Finally, the empirical results from alternative prediction windows, the Direction-of-Change test, the out-of-sample R2 test, alternative loss functions, and an alternative volatility estimator confirm that our results are robust.
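The sketch below assembles an extended HAR-RV regression with daily, weekly, and monthly RV terms plus leverage, jump, and overnight regressors, estimated by OLS on simulated data. The exact variable definitions (e.g. how jumps and overnight information are measured) are assumptions made for illustration and differ from the paper's construction.

```python
import numpy as np

def har_rv_design(rv, ret_neg, jump, overnight):
    """Build the regressor matrix for an extended HAR-RV sketch: daily, weekly
    (5-day), and monthly (22-day) RV averages, plus leverage (negative return),
    jump, and overnight terms.  Column choices follow the spirit of the abstract;
    the precise definitions vary across papers."""
    T = len(rv)
    rows, target = [], []
    for t in range(22, T - 1):
        rows.append([1.0,
                     rv[t],                       # daily RV
                     rv[t - 4:t + 1].mean(),      # weekly RV
                     rv[t - 21:t + 1].mean(),     # monthly RV
                     ret_neg[t],                  # leverage: negative part of return
                     jump[t],                     # jump component
                     overnight[t]])               # overnight information
        target.append(rv[t + 1])
    return np.array(rows), np.array(target)

rng = np.random.default_rng(5)
T = 1000
rv = np.abs(rng.normal(1.0, 0.3, T))
ret = rng.normal(0, 1, T)
X, y = har_rv_design(rv, np.minimum(ret, 0.0), np.abs(rng.normal(0, 0.1, T)),
                     rng.normal(0, 0.5, T))
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta.round(4))
```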

10.
Financial time series have two features which, in many cases, prevent the use of conventional estimators of volatilities and correlations: leptokurtic distributions and contamination of data with outliers. Other techniques are required to achieve stable and accurate results. In this paper, we review robust estimators for volatilities and correlations and identify those best suited for use in risk management. The selection criteria were that the estimator should be stable both to fractionally small departures for all data points (fat tails) and to fractionally large departures for a small number of data points (outliers). Since risk management typically deals with thousands of time series at once, another major requirement was that the approach be independent of any manual correction or data pre-processing. We recommend using volatility t-estimators, for which we derive the estimation-error formula for the case when the exact shape of the data distribution is unknown. A convenient robust estimator for correlations is Kendall's tau, whose drawback is that it does not guarantee positive definiteness of the correlation matrix. We use geometric optimization to overcome this problem by finding the closest correlation matrix to a given matrix in terms of the Hadamard norm. We propose the weights for the norm and demonstrate the efficiency of the algorithm on large-scale problems.
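As a hedged illustration of a robust correlation pipeline, the code below builds a Kendall's-tau-based correlation matrix (mapped to a linear correlation via sin(pi*tau/2)) and repairs any loss of positive semidefiniteness by simple eigenvalue clipping, rather than by the Hadamard-norm geometric optimization proposed in the paper.

```python
import numpy as np
from scipy.stats import kendalltau

def kendall_correlation_matrix(X):
    """Robust correlation matrix from Kendall's tau, mapped to a linear correlation
    via sin(pi*tau/2) (exact for elliptical distributions)."""
    n = X.shape[1]
    C = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            tau, _ = kendalltau(X[:, i], X[:, j])
            C[i, j] = C[j, i] = np.sin(np.pi * tau / 2.0)
    return C

def nearest_psd(C, eps=1e-8):
    """Simple eigenvalue-clipping repair of a non-positive-semidefinite correlation
    matrix.  Illustrative only; the paper instead finds the closest correlation
    matrix in a weighted Hadamard norm via geometric optimization."""
    w, V = np.linalg.eigh(C)
    C2 = V @ np.diag(np.clip(w, eps, None)) @ V.T
    d = np.sqrt(np.diag(C2))
    return C2 / np.outer(d, d)                       # renormalize to unit diagonal

rng = np.random.default_rng(6)
X = rng.standard_t(df=3, size=(500, 4))              # heavy-tailed data
X[:5] *= 20                                          # a few outliers
C = kendall_correlation_matrix(X)
print(np.linalg.eigvalsh(nearest_psd(C)).round(4))
```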

11.
The classical ratio estimator is one of the auxiliary information estimators frequently discussed in the audit sampling literature. The major weakness of this estimator is its unreliability when accounting populations have only one-sided errors or when the error rate is low. Efforts have been made to improve the classical estimator by using techniques such as the Jackknifed ratio discussed by Frost and Tamura (1982). This paper proposes a new method to estimate the population total error based on the ratio of error to book value, i.e., taintings.

The special features of the proposed procedure are that (1) it specifically models the special characteristics of typical accounting populations, and (2) it is the first study we know of in the audit sampling literature that uses simulation to capture the characteristics of the specific distribution of the estimator each time a confidence interval is constructed. This new approach became possible because of the recent publication of several studies on the empirical characteristics of accounting errors. Results of empirical tests indicate that the proposed method can significantly improve the reliability of the classical ratio under circumstances where the classical ratio needs improvement. Empirical comparisons are also made with a third ratio estimator under dollar-unit sampling. Again, the proposed method provides better reliability.
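For reference, a minimal sketch of the classical ratio estimator of the population total error that the paper seeks to improve upon; the simulated accounting population, with one-sided overstatement errors at a low error rate, is an illustrative assumption.

```python
import numpy as np

def ratio_estimate_total_error(book_sample, audit_sample, population_book_total):
    """Classical ratio estimator of the total error in an accounting population:
    scale the population book value by the sampled error-to-book ratio."""
    book = np.asarray(book_sample, dtype=float)
    errors = book - np.asarray(audit_sample, dtype=float)
    return errors.sum() / book.sum() * population_book_total

rng = np.random.default_rng(7)
N, n = 5000, 200
book = rng.lognormal(mean=6, sigma=1, size=N)
audit = book.copy()
err_idx = rng.choice(N, size=int(0.03 * N), replace=False)            # low error rate
audit[err_idx] *= 1 - rng.uniform(0.05, 0.5, size=err_idx.size)       # one-sided overstatements
idx = rng.choice(N, size=n, replace=False)                            # audit sample

print("true total error:", round((book - audit).sum(), 2))
print("ratio estimate  :",
      round(ratio_estimate_total_error(book[idx], audit[idx], book.sum()), 2))
```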

12.
Investment-cash flow sensitivity has declined and disappeared, even during the 2007–2009 credit crunch. If one believes that financial constraints have not disappeared, then investment-cash flow sensitivity cannot be a good measure of financial constraints. The decline and disappearance are robust to considerations of R&D and cash reserves, and across groups of firms. The information content in cash flow regarding investment opportunities has declined, but measurement error in Tobin's q does not completely explain the patterns in investment-cash flow sensitivity. The decline and disappearance cannot be explained by changes in sample composition, corporate governance, or market power, and remain a puzzle.

13.
Current real estate statistical valuation involves the estimation of parameters within a posited specification. Such parametric estimation requires judgment concerning model (1) variables and (2) functional form. In contrast, nonparametric regression estimation requires attention to (1) but permits greatly reduced attention to (2). Parametric estimators functionally model the parameters and variables affecting E(y|x), while nonparametric estimators directly model pdf(y, x) and hence E(y|x). This article applies the kernel nonparametric regression estimator to two different data sets and specifications. The article shows the nonparametric estimator outperforms the standard parametric estimator (OLS) across variable transformations and across data subsets differing in quality. In addition, the article reviews properties of nonparametric estimators, presents the history of nonparametric estimators in real estate, and discusses a representation of the kernel estimator as a nonparametric grid method.
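A minimal Nadaraya-Watson kernel regression sketch of E(y|x), with a Gaussian kernel and a hypothetical price-versus-square-footage example; the bandwidth and data-generating process are illustrative assumptions, not the article's specifications.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, bandwidth):
    """Kernel (Nadaraya-Watson) regression estimate of E(y|x) with a Gaussian kernel:
    a locally weighted average of y, modelling the data density rather than a
    parametric functional form."""
    u = (x_eval[:, None] - x_train[None, :]) / bandwidth
    w = np.exp(-0.5 * u**2)                           # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(8)
sqft = rng.uniform(800, 3500, 300)                    # hypothetical property sizes
price = 50_000 + 120 * sqft + 0.02 * (sqft - 2000) ** 2 + rng.normal(0, 20_000, 300)
grid = np.linspace(900, 3400, 6)
print(np.column_stack([grid, nadaraya_watson(sqft, price, grid, bandwidth=200.0)]).round(0))
```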

14.
This article makes two important contributions to the literature on the incentive effects of insider ownership. First, it presents a clean method for separating the positive wealth effect of insider ownership from the negative entrenchment effect, which can be applied to samples of companies from the US and any other country. Second, it measures the effects of insider ownership using a measure of firm performance, namely a marginal q, which ensures that the causal relationship estimated runs from ownership to performance. The article applies this method to a large sample of publicly listed firms from the Anglo-Saxon and Civil law traditions and confirms that managerial entrenchment has an unambiguous negative effect on firm performance as measured by both Tobin's (average) q and our marginal q, and that the wealth effect of insider ownership is unambiguously positive for both measures. We also test for the effects of ownership concentration for other categories of owners and find that while institutional ownership improves the performance in the USA, financial institutions have a negative impact in other Anglo-Saxon countries and in Europe.

15.
Finland experienced an extremely severe economic depression in the early 1990s. As a part of the government's crisis management policies, significant new legislation was passed that increased the supervisory powers of financial market regulators and reformed bankruptcy procedures, significantly decreasing the protection of creditors. We show that the introduction of these new laws resulted in positive abnormal stock returns. The new laws also led to increases in firms’ Tobin's q, especially for more levered firms. In contrast to previous studies, our results also suggest that public supervision of financial markets fosters rather than hampers financial market development.

16.
Shrinkage estimators of the covariance matrix are known to improve the stability over time of the Global Minimum Variance Portfolio (GMVP), as they are less error-prone. However, the improvement over the empirical covariance matrix is not optimal for small values of n, the estimation sample size. For typical asset allocation problems with small n, this paper proposes a new method to further reduce sampling error by shrinking traditional shrinkage estimators of the GMVP a second time. First, we show analytically that the weights of any GMVP can be shrunk – within the framework of ridge regression – towards those of the equally weighted portfolio in order to reduce sampling error. Second, Monte Carlo simulations and empirical applications show that applying our methodology to the GMVP based on shrinkage estimators of the covariance matrix leads to more stable portfolio weights, sharp decreases in portfolio turnover, and often statistically lower (resp. higher) out-of-sample variances (resp. Sharpe ratios). These results illustrate that double shrinkage estimation of the GMVP can be beneficial for realistic small estimation sample sizes.
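A hedged sketch of the "double shrinkage" idea: shrink the sample covariance toward a scaled identity, compute the GMVP weights, and then shrink those weights toward the equally weighted portfolio. Both shrinkage intensities are fixed constants here, whereas the article derives the weight-shrinkage intensity from a ridge-regression argument.

```python
import numpy as np

def gmvp_weights(cov):
    """Global Minimum Variance Portfolio weights: w = inv(S)1 / (1' inv(S) 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

def double_shrinkage_gmvp(returns, delta_cov=0.3, delta_w=0.3):
    """Minimal double-shrinkage sketch: covariance shrunk toward a scaled identity,
    then GMVP weights shrunk toward the equally weighted portfolio.  Fixed
    intensities are an assumption for illustration; the article calibrates the
    weight shrinkage via ridge regression."""
    S = np.cov(returns, rowvar=False)
    target = np.eye(S.shape[0]) * np.trace(S) / S.shape[0]
    S_shrunk = (1 - delta_cov) * S + delta_cov * target
    w = gmvp_weights(S_shrunk)
    return (1 - delta_w) * w + delta_w * np.ones_like(w) / len(w)

rng = np.random.default_rng(9)
R = rng.normal(0.0005, 0.01, size=(60, 20))            # n = 60 periods, N = 20 assets
print(double_shrinkage_gmvp(R).round(4))
```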

17.
Quantitative Finance, 2013, 13(5): 376–384
Abstract

Volatility plays an important role in derivatives pricing, asset allocation, and risk management, to name but a few areas. It is therefore crucial to make the utmost use of the scant information typically available in short time windows when estimating the volatility. We propose a volatility estimator using the high and the low information in addition to the close price, all of which are typically available to investors. The proposed estimator is based on a maximum likelihood approach. We present explicit formulae for the likelihood of the drift and volatility parameters when the underlying asset is assumed to follow a Brownian motion with constant drift and volatility. We then maximize this likelihood to obtain the estimator of the volatility. While we present the method in the context of a Brownian motion, the general methodology is applicable whenever one can obtain the likelihood of the volatility parameter given the high, low, and close information. We present simulations which indicate that our estimator achieves consistently better performance than existing estimators (that use the same information and assumptions) for simulated data. In addition, our simulations using real price data demonstrate that our method produces more stable estimates. We also consider the effects of quantized prices and discretized time.
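For context, a simple range-based comparator that also exploits high/low information: the Parkinson estimator versus the close-to-close standard deviation on simulated Brownian price paths. This is not the article's maximum-likelihood estimator; it only illustrates the efficiency gain available from the intraday range.

```python
import numpy as np

def parkinson_vol(day_range):
    """Parkinson (1980) range-based daily volatility: sigma^2 = E[ln(H/L)^2] / (4 ln 2).
    Shown only as a simple comparator using high/low information; not the article's
    maximum-likelihood estimator."""
    hl = np.log(np.asarray(day_range, dtype=float))
    return np.sqrt(np.mean(hl ** 2) / (4.0 * np.log(2.0)))

# Simulate driftless intraday Brownian log-prices; record each day's high/low range
# and its close-to-open return, then compare a close-based estimate with the range-based one.
rng = np.random.default_rng(10)
sigma_true, n_days, steps = 0.02, 250, 390
paths = np.cumsum(rng.normal(0.0, sigma_true / np.sqrt(steps), (n_days, steps)), axis=1)
high = np.maximum(paths.max(axis=1), 0.0)              # intraday high (log, relative to open)
low = np.minimum(paths.min(axis=1), 0.0)               # intraday low  (log, relative to open)
daily_ret = paths[:, -1]                               # close-to-open log return per day

print("true daily vol      :", sigma_true)
print("close-to-close stdev:", round(daily_ret.std(ddof=1), 4))
print("Parkinson (range)   :", round(parkinson_vol(np.exp(high - low)), 4))
```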

18.
Considering the growing need for managing financial risk, Value-at-Risk (VaR) prediction and portfolio optimisation with a focus on VaR have taken up an important role in banking and finance. Motivated by recent results showing that the choice of VaR estimator does not crucially influence decision-making in certain practical applications (e.g. in investment rankings), this study analyses the important question of how asset allocation decisions are affected when alternative VaR estimation methodologies are used. Focusing on the most popular, successful and conceptually different conditional VaR estimation techniques (i.e. historical simulation, the peaks-over-threshold method and quantile regression) and the flexible portfolio model of Campbell et al. [J. Banking Finance. 2001, 25(9), 1789–1804], we show in an empirical example and in a simulation study that these methods tend to deliver similar asset weights. In other words, optimal portfolio allocations appear to be not very sensitive to the choice of VaR estimator. This finding, which is robust in a variety of distributional environments and pre-whitening settings, supports the notion that, depending on the specific application, simple standard methods (i.e. historical simulation) used by many commercial banks do not necessarily have to be replaced by more complex approaches (based on, e.g. extreme value theory).
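A toy version of this comparison: a two-asset allocation chosen on a grid to maximize mean return per unit of VaR, once with historical-simulation VaR and once with a normal (variance-covariance) VaR. The grid-search objective is a deliberate simplification of the Campbell et al. portfolio model, and the simulated return distributions are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def hist_var(pnl, level=0.99):
    """Historical-simulation VaR: the empirical loss quantile."""
    return -np.quantile(pnl, 1 - level)

def normal_var(pnl, level=0.99):
    """Variance-covariance (normal) VaR, a simple parametric comparator."""
    return -(pnl.mean() + norm.ppf(1 - level) * pnl.std(ddof=1))

def best_weight(returns, var_fn, level=0.99):
    """Toy VaR-focused allocation: over a grid of two-asset weights, pick the weight
    with the highest mean return per unit of VaR (a simplification of the Campbell
    et al. model referenced in the abstract)."""
    best, best_score = 0.0, -np.inf
    for w in np.linspace(0.0, 1.0, 101):
        pnl = w * returns[:, 0] + (1 - w) * returns[:, 1]
        score = pnl.mean() / max(var_fn(pnl, level), 1e-12)
        if score > best_score:
            best, best_score = w, score
    return best

rng = np.random.default_rng(11)
R = np.column_stack([0.0006 + 0.01 * rng.standard_t(4, 2000),
                     0.0010 + 0.02 * rng.standard_t(4, 2000)])
print("weight in asset 1 (historical-simulation VaR):", best_weight(R, hist_var))
print("weight in asset 1 (normal VaR)               :", best_weight(R, normal_var))
```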

19.
The use of improved covariance matrix estimators as an alternative to the sample estimator is considered an important approach for enhancing portfolio optimization. Here we empirically compare the performance of nine improved covariance estimation procedures using daily returns of 90 highly capitalized US stocks for the period 1997–2007. We find that the usefulness of covariance matrix estimators strongly depends on the ratio between the estimation period T and the number of stocks N, on the presence or absence of short selling, and on the performance metric considered. When short selling is allowed, several estimation methods achieve a realized risk that is significantly smaller than that obtained with the sample covariance method. This is particularly true when T/N is close to one. Moreover, many estimators reduce the fraction of negative portfolio weights, while little improvement is achieved in the degree of diversification. On the contrary, when short selling is not allowed and T > N, the considered methods are unable to outperform the sample covariance in terms of realized risk, but can give much more diversified portfolios than those obtained with the sample covariance. When T < N, the use of the sample covariance matrix and of the pseudo-inverse gives portfolios with very poor performance.

20.
Abstract

The Conditional Tail Expectation (CTE), also called Expected Shortfall or Tail-VaR, is a robust, convenient, practical, and coherent measure for quantifying financial risk exposure. The CTE is quickly becoming the preferred measure for statutory balance sheet valuation whenever real-world stochastic methods are used to set liability provisions. We look at some statistical properties of the methods that are commonly used to estimate the CTE and develop a simple formula for the variance of the CTE estimator that is valid in the large sample limit. We also show that the formula works well for finite sample sizes. Formula results are compared with sample values from real-world Monte Carlo simulations for some common loss distributions, including equity-linked annuities with investment guarantees, whole life insurance, and operational risks. We develop the CTE variance formula in the general case using a system of biased weights and explore importance sampling, a form of variance reduction, as a way to improve the quality of the estimators for a given sample size. The paper closes with a discussion of practical applications.
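As a hedged sketch, the code below computes the empirical CTE together with a large-sample standard-error approximation of the form Var(CTE) ≈ [Var(X | X > VaR) + q(CTE − VaR)²] / (n(1 − q)), and checks it against a Monte Carlo estimate of the sampling variability. The exact variance formula developed in the paper may differ; the lognormal loss model and sample sizes are illustrative assumptions.

```python
import numpy as np

def cte_estimate(losses, level=0.95):
    """Empirical CTE: average of the losses at or above the empirical VaR."""
    x = np.sort(np.asarray(losses, dtype=float))
    q = np.quantile(x, level)
    tail = x[x >= q]
    return q, tail.mean()

def cte_stderr_large_sample(losses, level=0.95):
    """Large-sample standard-error approximation of the empirical CTE of the form
    Var(CTE) ~ [Var(X | X > VaR) + level * (CTE - VaR)^2] / (n * (1 - level)).
    Presented as a hedged sketch of the kind of formula discussed in the abstract."""
    x = np.asarray(losses, dtype=float)
    q, cte = cte_estimate(x, level)
    tail = x[x >= q]
    var_hat = (tail.var(ddof=1) + level * (cte - q) ** 2) / (x.size * (1 - level))
    return np.sqrt(var_hat)

rng = np.random.default_rng(12)
sim = rng.lognormal(mean=0.0, sigma=1.0, size=(1000, 2000))     # 1000 scenarios, n = 2000 each
ctes = np.array([cte_estimate(row)[1] for row in sim])
print("Monte Carlo stdev of CTE estimator:", round(ctes.std(ddof=1), 4))
print("formula stderr (one sample)       :", round(cte_stderr_large_sample(sim[0]), 4))
```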
