Similar Articles
20 similar articles found (search time: 31 ms).
1.
Brown and Gibbons (1985) developed a theory of relative risk aversion estimation in terms of average market rates of return and the variance of market rates of return. However, the exact sampling distributions of the relative risk aversion estimators have not been derived. The main purpose of this paper is to derive the exact sampling distribution of an appropriate relative risk aversion estimator. First, we derive the density of Brown and Gibbons' maximum likelihood estimator and show that the central t distribution is not appropriate for testing the significance of the estimated relative risk aversion. We then derive the minimum variance unbiased estimator by a linear transformation of Brown and Gibbons' maximum likelihood estimator. Its density function is neither a central nor a noncentral t distribution, and it has been tabulated. An empirical example illustrates the application of this new sampling distribution.
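As a rough illustration of the quantities involved, the sketch below computes a moment-based relative risk aversion estimate from average excess log returns and the variance of log market returns. This first-order form is an assumption for illustration; the paper's exact maximum likelihood estimator and its sampling distribution are not reproduced here.

```python
import numpy as np

def rra_estimate(market_ret, riskfree_ret):
    """Moment-based relative risk aversion estimate: mean excess log
    return divided by the variance of log market returns (a standard
    first-order approximation, assumed here for illustration)."""
    excess = np.log1p(market_ret) - np.log1p(riskfree_ret)
    var_m = np.var(np.log1p(market_ret), ddof=1)
    return excess.mean() / var_m

# illustrative use with simulated monthly data
rng = np.random.default_rng(0)
rm = rng.normal(0.006, 0.04, 480)   # market returns
rf = np.full(480, 0.002)            # risk-free rate
print(f"estimated relative risk aversion: {rra_estimate(rm, rf):.2f}")
```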

2.
This paper investigates the role of high-order moments in the estimation of conditional value at risk (VaR). We use the skewed generalized t distribution (SGT) with time-varying parameters to provide an accurate characterization of the tails of the standardized return distribution. We allow the high-order moments of the SGT density to depend on the past information set, and hence relax the conventional assumption in conditional VaR calculation that the distribution of standardized returns is iid. The maximum likelihood estimates show that the time-varying conditional volatility, skewness, tail-thickness, and peakedness parameters of the SGT density are statistically significant. The in-sample and out-of-sample performance results indicate that the conditional SGT-GARCH approach with autoregressive conditional skewness and kurtosis provides very accurate and robust estimates of the actual VaR thresholds.
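A minimal sketch of the conditional VaR calculation the abstract describes, with two loud simplifications: a plain GARCH(1,1) filter with constant parameters stands in for the time-varying SGT-GARCH model, and a Student t quantile stands in for the SGT quantile (SciPy has no SGT density). All parameter values are illustrative.

```python
import numpy as np
from scipy import stats

def conditional_var(returns, omega, alpha, beta, nu, level=0.01):
    """One-step-ahead VaR threshold from a GARCH(1,1) volatility
    filter with a heavy-tailed quantile. A Student t stands in for
    the paper's SGT density; omega, alpha, beta, nu are assumed to be
    pre-estimated (values below are illustrative)."""
    sigma2 = np.empty(len(returns) + 1)
    sigma2[0] = returns.var()
    for t, r in enumerate(returns):
        sigma2[t + 1] = omega + alpha * r**2 + beta * sigma2[t]
    q = stats.t.ppf(level, nu) * np.sqrt((nu - 2) / nu)  # unit-variance t quantile
    return q * np.sqrt(sigma2[-1])                       # a negative return threshold

rng = np.random.default_rng(1)
r = rng.standard_t(5, 1000) * 0.01
print(f"1% one-day VaR: {conditional_var(r, 1e-6, 0.05, 0.90, 5):.4f}")
```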

3.
This paper outlines a general methodology for estimating the parameters of financial models commonly employed in the literature. A numerical Bayesian technique is utilised to obtain the posterior density of model parameters and functions thereof. Unlike maximum likelihood estimation, where inference is only justified in large samples, the Bayesian densities are exact for any sample size. A series of simulation studies is conducted to compare the properties of point estimates, the distribution of option and bond prices, and the power of specification tests under maximum likelihood and Bayesian methods. Results suggest that maximum-likelihood-based asymptotic distributions have poor finite-sample properties.

4.
Data insufficiency and reporting thresholds are two main issues in operational risk modelling. When these conditions are present, maximum likelihood estimation (MLE) may produce very poor parameter estimates. In this study, we first investigate four methods to estimate the parameters of truncated distributions for small samples: MLE, the expectation-maximization algorithm, penalized likelihood estimators, and Bayesian methods. Without any proper prior information, Jeffreys' prior for truncated distributions is used. Based on a simulation study for the log-normal distribution, we find that the Bayesian method gives much more credible and reliable estimates than the MLE method. Finally, an application to operational loss severity estimation using real data is conducted with the truncated log-normal and log-gamma distributions. With the Bayesian method, the loss distribution parameters and value-at-risk measure for every cell with loss data can be estimated separately for internal and external data. Moreover, confidence intervals for the Bayesian estimates are obtained via a bootstrap method.
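A sketch of the truncated MLE the study starts from: for losses reported only above a threshold H, the conditional log-likelihood divides each density value by the survival probability 1 - F(H). Assumes a lognormal severity; the EM, penalized-likelihood and Bayesian alternatives studied in the paper are not shown.

```python
import numpy as np
from scipy import stats, optimize

def truncated_lognormal_mle(losses, threshold):
    """MLE for a lognormal severity observed only above a reporting
    threshold H: maximize sum log f(x) - n * log(1 - F(H))."""
    def nll(params):
        mu, sigma = params
        if sigma <= 0:
            return np.inf
        logpdf = stats.norm.logpdf(np.log(losses), mu, sigma) - np.log(losses)
        logsurv = stats.norm.logsf(np.log(threshold), mu, sigma)
        return -(logpdf.sum() - len(losses) * logsurv)
    x0 = [np.log(losses).mean(), np.log(losses).std()]
    return optimize.minimize(nll, x0, method="Nelder-Mead").x

rng = np.random.default_rng(2)
full = rng.lognormal(10.0, 2.0, 500)
obs = full[full >= 25000.0]          # only losses above the threshold are reported
print(truncated_lognormal_mle(obs, 25000.0))
```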

5.
The quality of operational risk data sets suffers from missing or contaminated data points, which may lead to implausible characteristics of the estimates. Outliers, especially, can make a modeler's task difficult and can result in arbitrarily large capital charges. Robust statistics provides ways to deal with these problems, as well as measures of the reliability of estimators. We show that maximum likelihood estimation can be misleading and unreliable under typical operational risk severity distributions. The robustness of the estimators for the generalized Pareto, Weibull, and lognormal distributions is measured considering both global and local reliability, represented by the breakdown point and the influence function of the estimate.
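A toy illustration of the robustness point, under the assumption that the severity is lognormal: the MLE of the lognormal location (the mean of the logs) has an unbounded influence function, so a single extreme loss drags it arbitrarily far, while the median of the logs (50% breakdown point) barely moves. The paper's actual estimators and distributions are richer than this.

```python
import numpy as np

def lognormal_mu_estimates(sample):
    """Contrast a non-robust MLE with a robust alternative for the
    lognormal mu parameter: the MLE is the mean of the logs, while
    the median of the logs has a 50% breakdown point."""
    logs = np.log(sample)
    return logs.mean(), np.median(logs)

rng = np.random.default_rng(10)
clean = rng.lognormal(10.0, 2.0, 200)
contaminated = np.append(clean, 1e12)          # a single extreme loss
print("clean        (MLE, median):", np.round(lognormal_mu_estimates(clean), 3))
print("contaminated (MLE, median):", np.round(lognormal_mu_estimates(contaminated), 3))
```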

6.
This paper shows how to represent a vector autoregression (VAR) in terms of the eigenvalues and eigenvectors of its companion matrix. This representation is used to impose the exact restrictions implied by the expectations hypothesis on the VAR for short and long term interest rates and to calculate the restricted maximum likelihood estimates. The first difference representation for short and long rates used by Sargent (1979) is shown to be inconsistent with the expectations hypothesis, but a VAR with two unit roots is constructed that satisfies the exact restrictions and leads to similar restricted estimates.
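A sketch of the companion-matrix representation: stack the VAR(p) coefficient matrices into companion form and eigen-decompose it. The coefficient values below are hypothetical; imposing the expectations-hypothesis restrictions on the eigenvalues and eigenvectors is the paper's contribution and is not shown.

```python
import numpy as np

def companion_eig(coef_matrices):
    """Eigen-decomposition of the companion matrix of a VAR(p).
    coef_matrices is a list [A1, ..., Ap] of (k x k) arrays from
    y_t = A1 y_{t-1} + ... + Ap y_{t-p} + e_t."""
    k = coef_matrices[0].shape[0]
    p = len(coef_matrices)
    top = np.hstack(coef_matrices)
    bottom = np.hstack([np.eye(k * (p - 1)), np.zeros((k * (p - 1), k))])
    companion = np.vstack([top, bottom])
    eigvals, eigvecs = np.linalg.eig(companion)
    return companion, eigvals, eigvecs

A1 = np.array([[0.9, 0.1], [0.05, 0.95]])
A2 = np.array([[0.05, 0.0], [0.0, 0.04]])
_, lam, _ = companion_eig([A1, A2])
print("moduli of eigenvalues:", np.round(np.abs(lam), 3))  # unit roots have moduli near 1
```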

7.
The probability of informed trading (PIN) is a commonly used market microstructure measure for detecting the level of information asymmetry. Estimating PIN can be problematic due to corner solutions, local maxima and floating point exceptions (FPE). Yan and Zhang [J. Bank. Finance, 2012, 36, 454–467] show that whilst factorization can solve FPE, boundary solutions appear frequently in maximum likelihood estimation for PIN. A grid search initial value algorithm is suggested to overcome this problem. We present a faster method for reducing the likelihood of boundary solutions and local maxima based on hierarchical agglomerative clustering (HAC). We show that HAC can be used to determine an accurate and fast starting value approximation for PIN. This assists the maximum likelihood estimation process in both speed and accuracy.
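A sketch of the HAC starting-value idea, under assumptions of ours rather than the paper's exact mapping: cluster days by order imbalance into three groups, interpret them as good-news, bad-news and no-news days, and back out rough values of the PIN parameters (alpha, delta, mu, eps_b, eps_s) to initialize the likelihood maximization.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def hac_starting_values(buys, sells):
    """HAC-based starting values for PIN estimation: cluster days on
    order imbalance into three groups and back out rough parameters.
    The mapping follows the spirit, not the letter, of the paper."""
    imbalance = (buys - sells).reshape(-1, 1).astype(float)
    labels = fcluster(linkage(imbalance, method="ward"), t=3, criterion="maxclust")
    centers = [imbalance[labels == c].mean() for c in (1, 2, 3)]
    good, none, bad = np.argsort(centers)[::-1] + 1   # rank clusters by imbalance
    n = len(buys)
    n_good, n_bad = (labels == good).sum(), (labels == bad).sum()
    alpha = (n_good + n_bad) / n
    delta = n_bad / max(n_good + n_bad, 1)
    eps_b = buys[labels == none].mean()
    eps_s = sells[labels == none].mean()
    mu = buys[labels == good].mean() - eps_b if n_good else 0.0
    return dict(alpha=alpha, delta=delta, mu=max(mu, 1e-6), eps_b=eps_b, eps_s=eps_s)

rng = np.random.default_rng(3)
b = rng.poisson(40, 60); s = rng.poisson(40, 60)
b[:15] += rng.poisson(30, 15)   # simulated good-news days
s[15:25] += rng.poisson(30, 10) # simulated bad-news days
print(hac_starting_values(b, s))
```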

8.
Historically, the normal variance model has been used to describe stock return distributions. This model is based on taking the conditional stock return distribution to be normal with its variance itself being a random variable. The form of the actual stock return distribution will depend on the distribution for the variance. In practice, the distributions chosen for the variance appear to be very limited. In this note, we derive a comprehensive collection of formulas for the actual stock return distribution, covering some sixteen flexible families. The corresponding estimation procedures are derived by the method of moments and the method of maximum likelihood. We feel that this work could serve as a useful reference and lead to improved modelling with respect to stock market returns.
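A sketch of the normal variance-mixture construction: the return density is the normal density averaged over a mixing distribution for the variance, here evaluated by quadrature. With an inverse-gamma mixing density the mixture collapses to a Student t, which makes the sketch easy to verify; the paper's sixteen families generalize this choice.

```python
import numpy as np
from scipy import stats, integrate

def mixture_pdf(x, mixing_pdf):
    """Normal variance-mixture density f(x) = integral of
    N(x; 0, v) * g(v) dv, evaluated by quadrature."""
    integrand = lambda v: stats.norm.pdf(x, scale=np.sqrt(v)) * mixing_pdf(v)
    val, _ = integrate.quad(integrand, 0, np.inf)
    return val

nu = 5.0
invgamma_pdf = lambda v: stats.invgamma.pdf(v, nu / 2, scale=nu / 2)
print(f"mixture pdf at 1.0:   {mixture_pdf(1.0, invgamma_pdf):.5f}")
print(f"Student t pdf at 1.0: {stats.t.pdf(1.0, nu):.5f}")  # should match closely
```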

9.
This paper develops a maximum likelihood estimation method for the deposit insurance pricing model of Duan, Moreau and Sealey (DMS) [J. Bank. Financ., 1995, 19, 1091]. A sample of 10 US banks is used to illustrate the estimation method. Our results are then compared to those obtained with the modified Ronn–Verma method used in DMS. Our findings reveal that the maximum likelihood method yields much larger estimates for the deposit insurance value than the modified Ronn–Verma method. We conduct a Monte Carlo study to ascertain the performance of the maximum likelihood estimation method. The simulation results are clearly in favor of our proposed method.

10.
The stability of estimates is critical when applying advanced measurement approaches (AMA) such as the loss distribution approach (LDA) for operational risk capital modeling. Recent studies have identified issues with capital estimates obtained by applying the maximum likelihood estimation (MLE) method to truncated distributions: significant upward mean-bias, considerable uncertainty about the estimates, and non-robustness to both small and large losses. Although alternative estimation approaches have been proposed, there has not been any comprehensive study of how they perform compared to the MLE method. This paper is the first comprehensive study of the performance of various potentially promising alternative approaches (including the minimum distance approach, quantile distance approach, scaling-based bias correction, upward scaling of lower quantiles, and right-truncated distributions) as compared to MLE with regard to accuracy, precision and robustness. More importantly, based on the properties of each estimator, we propose a right-truncation with probability weighted least squares method, combining a right-truncated distribution with the minimization of a probability weighted distance (the quadratic upper-tail Anderson–Darling distance). Both simulation results and a real data application demonstrate that this method significantly reduces the bias and volatility of capital estimates and improves their robustness to small losses near the threshold and to moving the threshold.

11.
The accuracy of real estate indices: Repeat sale estimators
Simulation techniques allow us to examine the behavior and accuracy of several repeat sales regression estimators used to construct real estate return indices. We show that the generalized least squares (GLS) method is the maximum likelihood estimator, and we show how estimation accuracy can be significantly improved through a Bayesian approach. In addition, we introduce a biased estimation procedure based upon the James–Stein method to address the problems of multicollinearity common to the procedure.
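A sketch of the three-stage repeat sales estimator in the spirit of the GLS method discussed: OLS on a +1/-1 sale-time dummy design, a regression of squared residuals on the holding interval to estimate the error variance, then weighted least squares. The Bayesian and James–Stein refinements the abstract mentions are not shown.

```python
import numpy as np

def repeat_sales_gls(first, second, log_price_ratio, n_periods):
    """Case-Shiller-style three-stage repeat sales estimator: OLS on
    a +1/-1 dummy design, regress squared residuals on the holding
    interval, then GLS (here WLS). Returns the log index by period."""
    n = len(first)
    X = np.zeros((n, n_periods))
    X[np.arange(n), second] = 1.0
    X[np.arange(n), first] -= 1.0
    X = X[:, 1:]                                  # base period normalized to 0
    beta = np.linalg.lstsq(X, log_price_ratio, rcond=None)[0]
    resid2 = (log_price_ratio - X @ beta) ** 2
    gap = (second - first).astype(float)
    a, b = np.linalg.lstsq(np.column_stack([np.ones(n), gap]), resid2, rcond=None)[0]
    w = 1.0 / np.sqrt(np.clip(a + b * gap, 1e-8, None))
    beta_gls = np.linalg.lstsq(X * w[:, None], log_price_ratio * w, rcond=None)[0]
    return np.concatenate([[0.0], beta_gls])

rng = np.random.default_rng(4)
index = np.concatenate([[0.0], np.cumsum(rng.normal(0.01, 0.02, 19))])
f = rng.integers(0, 10, 300); s = f + rng.integers(1, 10, 300)
y = index[s] - index[f] + rng.normal(0, 0.05, 300) * np.sqrt(s - f)
print(np.round(repeat_sales_gls(f, s, y, 20), 3))
```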

12.
A rich variety of probability distributions has been proposed in the actuarial literature for fitting insurance loss data. Examples include: lognormal, log-t, various versions of Pareto, loglogistic, Weibull, gamma and its variants, and generalized beta of the second kind distributions, among others. In this paper, we supplement the literature by adding the log-folded-normal and log-folded-t families. Shapes of the density function and key distributional properties of the 'folded' distributions are presented along with three methods for the estimation of parameters: the method of maximum likelihood, the method of moments, and the method of trimmed moments. Further, large- and small-sample properties of these estimators are studied in detail. Finally, we fit the newly proposed distributions to data representing the total damage done by 827 fires in Norway in 1988. The fitted models are then employed in a few quantitative risk management examples, where point and interval estimates for several value-at-risk measures are calculated.
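A sketch of one reading of the log-folded-normal density, assuming X = exp(|Z|) with Z normal, which confines the support to x >= 1; the paper's parameterization adds location/scale flexibility not reproduced here.

```python
import numpy as np
from scipy import stats

def log_folded_normal_pdf(x, mu, sigma):
    """Density of X = exp(|Z|), Z ~ N(mu, sigma^2):
    f(x) = [phi((ln x - mu)/sigma) + phi((ln x + mu)/sigma)] / (sigma * x)
    for x >= 1 (one reading of the log-folded-normal family)."""
    x = np.asarray(x, float)
    y = np.log(x)
    pdf = (stats.norm.pdf((y - mu) / sigma) + stats.norm.pdf((y + mu) / sigma)) / (sigma * x)
    return np.where(x >= 1.0, pdf, 0.0)

xs = np.array([1.1, 2.0, 5.0, 20.0])
print(np.round(log_folded_normal_pdf(xs, 1.0, 1.5), 4))
```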

13.
This paper discusses how conditional heteroskedasticity models can be estimated efficiently without imposing strong distributional assumptions such as normality. Using the generalized method of moments (GMM) principle, we show that for a class of models with a symmetric conditional distribution, the GMM estimates obtained from the joint estimating equations corresponding to the conditional mean and variance of the model are efficient when the instruments are chosen optimally. A simple ARCH(1) model is used to illustrate the feasibility of the proposed estimation procedure.
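A minimal sketch of the GMM estimating equations for the ARCH(1) illustration: the conditional-variance error is orthogonal to the instruments (1, y_{t-1}^2), and an identity weighting matrix is used rather than the optimal instruments the paper constructs.

```python
import numpy as np
from scipy import optimize

def gmm_arch1(y):
    """GMM for a zero-mean ARCH(1) model y_t = e_t with
    E[e_t^2 | F_{t-1}] = w + a * y_{t-1}^2, using instruments
    (1, y_{t-1}^2) and identity weighting."""
    def moments(params):
        w, a = params
        u = y[1:] ** 2 - (w + a * y[:-1] ** 2)   # conditional variance error
        return np.array([u.mean(), (u * y[:-1] ** 2).mean()])
    obj = lambda p: moments(p) @ moments(p)
    return optimize.minimize(obj, [y.var() * 0.5, 0.3], method="Nelder-Mead").x

rng = np.random.default_rng(5)
n, w_true, a_true = 5000, 0.5, 0.4
e = np.empty(n)
prev = 0.0
for t in range(n):                               # simulate ARCH(1) data
    h = w_true + a_true * prev ** 2
    prev = e[t] = rng.standard_normal() * np.sqrt(h)
print(gmm_arch1(e))  # roughly (0.5, 0.4)
```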

14.
While in some linear estimation problems the principle of unbiasedness can be said to be appropriate, we have just seen that in the present context we will have to appeal to other criteria. Let us first consider what we get from the maximum likelihood method. We do not claim any particular optimum property for this estimate of the risk distribution; it seems plausible, however, that one can prove a large sample result analogous to the classical result on maximum likelihood estimation.

15.
Maximum likelihood estimation of discretely observed diffusion processes is mostly hampered by the lack of a closed form solution of the transient density. It has recently been argued that a most generic remedy to this problem is the numerical solution of the pertinent Fokker–Planck (FP) or forward Kolmogorov equation. Here we expand existing work on univariate diffusions to higher dimensions. We find that in the bivariate and trivariate cases, a numerical solution of the FP equation via alternating direction finite difference schemes yields results surprisingly close to exact maximum likelihood in a number of test cases. After providing evidence for the efficiency of such a numerical approach, we illustrate its application for the estimation of a joint system of short-run and medium-run investor sentiment and asset price dynamics using German stock market data.
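A univariate sketch of the approach: solve the Fokker–Planck equation on a grid with an implicit finite-difference scheme, starting from a discretized delta at the last observation, and read off the transition density at the next observation. The drift and diffusion below are Ornstein–Uhlenbeck and all settings are illustrative; the paper's alternating direction schemes handle the bivariate and trivariate cases.

```python
import numpy as np

def fp_transition_logpdf(x0, x1, dt, kappa, theta, sigma,
                         xlim=(-2.0, 4.0), nx=400, nt=50):
    """Transition density of dx = kappa*(theta - x) dt + sigma dW by
    solving dp/dt = -d/dx[mu p] + (sigma^2/2) d2p/dx2 with an
    implicit finite-difference scheme on a grid."""
    grid = np.linspace(*xlim, nx)
    h = grid[1] - grid[0]
    mu = kappa * (theta - grid)
    L = np.zeros((nx, nx))
    for j in range(1, nx - 1):                     # interior FD stencil
        L[j, j - 1] = mu[j - 1] / (2 * h) + sigma**2 / (2 * h**2)
        L[j, j] = -sigma**2 / h**2
        L[j, j + 1] = -mu[j + 1] / (2 * h) + sigma**2 / (2 * h**2)
    inv_step = np.linalg.inv(np.eye(nx) - (dt / nt) * L)  # invert once, reuse
    p = np.zeros(nx)
    p[np.argmin(np.abs(grid - x0))] = 1.0 / h      # delta initial condition
    for _ in range(nt):
        p = inv_step @ p
    return np.log(max(np.interp(x1, grid, p), 1e-300))

def loglik(obs, dt, kappa, theta, sigma):
    return sum(fp_transition_logpdf(a, b, dt, kappa, theta, sigma)
               for a, b in zip(obs[:-1], obs[1:]))

rng = np.random.default_rng(6)
x = [1.0]
for _ in range(20):                                # simulate a short OU path
    x.append(x[-1] + 0.5 * (1.0 - x[-1]) * 0.1 + 0.3 * np.sqrt(0.1) * rng.standard_normal())
print(f"log-likelihood: {loglik(np.array(x), 0.1, 0.5, 1.0, 0.3):.2f}")
```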

16.
Stochastic volatility (SV) models are theoretically more attractive than GARCH-type models because they allow additional randomness. Classical SV models assume a continuous probability distribution for volatility, so the likelihood function is not computable in closed form and estimation requires a Bayesian approach. A recent approach instead considers discrete stochastic autoregressive volatility models, which have a bounded and tractable likelihood function, so maximum likelihood estimation can be achieved. This paper proposes a general approach to link SV models under the physical probability measure, of both continuous and discrete types, to their processes under a martingale measure. Doing so enables us to deduce closed-form expressions for the VIX forecast for both SV models and GARCH-type models. We then carry out an empirical study to compare the performance of the continuous and discrete SV models, using GARCH models as benchmarks.

17.
This paper derives exact formulas for retrieving risk neutral moments of future payoffs of any order from generic European-style option prices. It also provides an exact formula for retrieving the expected quadratic variation of the stock market implied by European option prices, which is nowadays used as an estimate of the implied volatility, and a formula approximating the jump component of this measure of variation. To apply the above formulas to discrete sets of option prices, the paper suggests a numerical procedure and provides upper bounds on its approximation errors. The performance of this procedure is evaluated through a simulation and an empirical exercise. Both of these exercises clearly indicate that the suggested numerical procedure can provide accurate estimates of the risk neutral moments over different horizons ahead. These can in turn be employed to obtain accurate estimates of risk neutral densities and to calculate option prices efficiently, in a model-free manner. The paper also shows that, in contrast to the prevailing view, ignoring the jump component of the underlying asset can lead to seriously biased estimates of the new volatility index suggested by the Chicago Board Options Exchange.
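A sketch of one well-known discretization in this family, the CBOE-VIX-style model-free implied variance computed from out-of-the-money option prices; the strikes and prices below are hypothetical, and the paper's exact moment formulas and error bounds go well beyond this.

```python
import numpy as np

def model_free_implied_variance(strikes, otm_prices, forward, r, T):
    """CBOE-VIX-style discretization of the model-free implied
    variance: (2 e^{rT} / T) * sum(dK / K^2 * Q(K))
    - (1/T) * (F/K0 - 1)^2, with Q(K) the out-of-the-money option
    price at strike K and K0 the first strike below the forward."""
    strikes = np.asarray(strikes, float)
    dK = np.gradient(strikes)                      # central strike spacing
    k0 = strikes[strikes <= forward].max()
    contrib = (dK / strikes**2) * np.asarray(otm_prices)
    return (2 * np.exp(r * T) / T) * contrib.sum() - (forward / k0 - 1) ** 2 / T

# hypothetical strikes and OTM option prices
K = np.arange(80, 125, 5.0)
Q = np.array([0.4, 0.8, 1.5, 2.8, 4.9, 2.6, 1.3, 0.6, 0.25])
print(f"implied vol: {np.sqrt(model_free_implied_variance(K, Q, 100.0, 0.01, 30/365)):.2%}")
```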

18.
Quantitative Finance, 2013, 13(3): 163–172
Support vector machines (SVMs) are a new nonparametric tool for regression estimation. We use this tool to estimate the parameters of a GARCH model for predicting the conditional volatility of stock market returns. GARCH models are usually estimated using maximum likelihood (ML) procedures, assuming that the data are normally distributed. In this paper, we show that GARCH models can be estimated using SVMs and that such estimates have higher predictive ability than those obtained via common ML methods.
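A sketch of the idea using scikit-learn's SVR, with an assumption made loud: instead of fitting GARCH coefficients, squared returns are regressed on their own lags by support vector regression and the fit is used for a one-step volatility forecast. Kernel and hyperparameters are illustrative, not the paper's tuned values.

```python
import numpy as np
from sklearn.svm import SVR

def svr_volatility_forecast(returns, lags=5):
    """Regress squared returns on their own lags with support vector
    regression and return a one-step-ahead volatility forecast."""
    r2 = returns ** 2
    X = np.column_stack([r2[i:len(r2) - lags + i] for i in range(lags)])
    y = r2[lags:]
    model = SVR(kernel="rbf", C=1.0, epsilon=1e-5).fit(X, y)
    next_x = r2[-lags:].reshape(1, -1)
    return float(np.sqrt(max(model.predict(next_x)[0], 0.0)))

rng = np.random.default_rng(7)
r = rng.standard_normal(500) * 0.01
print(f"next-day vol forecast: {svr_volatility_forecast(r):.4f}")
```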

19.
This paper studies the parameter estimation problem for Ornstein–Uhlenbeck stochastic volatility models driven by Lévy processes. Estimation is regarded as the principal challenge in applying these models since they were proposed by Barndorff-Nielsen and Shephard [J. R. Stat. Soc. Ser. B, 2001, 63(2), 167–241]. Most previous work has used a Bayesian paradigm, whereas we treat the problem in the framework of maximum likelihood estimation, applying gradient-based simulation optimization. A hidden Markov model is introduced to formulate the likelihood of observations; sequential Monte Carlo is applied to sample the hidden states from the posterior distribution; smooth perturbation analysis is used to deal with the discontinuities introduced by jumps in estimating the gradient. Numerical experiments indicate that the proposed gradient-based simulated maximum likelihood estimation approach provides an efficient alternative to current estimation methods.
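A sketch of the sequential Monte Carlo step using a bootstrap particle filter, with a Gaussian-driven log-volatility model standing in for the Lévy-driven Barndorff-Nielsen–Shephard dynamics; the gradient estimation via smooth perturbation analysis is not shown.

```python
import numpy as np

def particle_loglik(y, phi, sigma_eta, beta, n_particles=2000, seed=0):
    """Bootstrap particle filter log-likelihood for a basic SV model
    x_t = phi*x_{t-1} + sigma_eta*eta_t, y_t = beta*exp(x_t/2)*eps_t."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0, sigma_eta / np.sqrt(1 - phi**2), n_particles)
    ll = 0.0
    for obs in y:
        x = phi * x + sigma_eta * rng.standard_normal(n_particles)   # propagate
        s = beta * np.exp(x / 2)
        w = np.exp(-0.5 * (obs / s) ** 2) / (s * np.sqrt(2 * np.pi)) # weight
        ll += np.log(w.mean() + 1e-300)
        idx = rng.choice(n_particles, n_particles, p=w / w.sum())    # resample
        x = x[idx]
    return ll

rng = np.random.default_rng(8)
x_true, ys = 0.0, []
for _ in range(200):                              # simulate observations
    x_true = 0.95 * x_true + 0.2 * rng.standard_normal()
    ys.append(np.exp(x_true / 2) * rng.standard_normal())
print(f"log-likelihood: {particle_loglik(np.array(ys), 0.95, 0.2, 1.0):.1f}")
```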

20.
We propose a parametric state space model of asset return volatility with an accompanying estimation and forecasting framework that allows for ARFIMA dynamics, random level shifts and measurement errors. The Kalman filter is used to construct the state-augmented likelihood function and subsequently to generate forecasts, which are mean and path-corrected. We apply our model to eight daily volatility series constructed from both high-frequency and daily returns. Full sample parameter estimates reveal that random level shifts are present in all series. Genuine long memory is present in most high-frequency measures of volatility, whereas there is little remaining dynamics in the volatility measures constructed using daily returns. From extensive forecast evaluations, we find that our ARFIMA model with random level shifts consistently belongs to the 10% Model Confidence Set across a variety of forecast horizons, asset classes and volatility measures. The gains in forecast accuracy can be very pronounced, especially at longer horizons.
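A stripped-down sketch of the Kalman filter likelihood construction, assuming a plain local-level state with measurement error; the ARFIMA dynamics, random level shifts and state augmentation of the paper are omitted.

```python
import numpy as np

def kalman_loglik(y, q, r):
    """Gaussian log-likelihood of a local-level model via the Kalman
    filter: state a_t = a_{t-1} + w_t (var q), observation
    y_t = a_t + v_t (var r)."""
    a, p = y[0], q + r          # rough initialization at the first observation
    ll = 0.0
    for obs in y[1:]:
        p = p + q               # predict state variance
        f = p + r               # innovation variance
        v = obs - a             # innovation
        ll += -0.5 * (np.log(2 * np.pi * f) + v**2 / f)
        k = p / f               # Kalman gain
        a = a + k * v           # update state
        p = (1 - k) * p
    return ll

rng = np.random.default_rng(9)
level = np.cumsum(rng.normal(0, 0.1, 300))        # latent volatility level
y = level + rng.normal(0, 0.3, 300)               # noisy volatility measure
print(f"log-likelihood: {kalman_loglik(y, 0.01, 0.09):.1f}")
```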
