Similar Articles
 20 similar articles found
1.
We report evidence that boundary solutions can bias estimates of the probability of informed trading (PIN). We develop an algorithm to overcome this bias and use it to estimate PIN for nearly 80,000 stock-quarters between 1993 and 2004. We obtain two sets of PIN estimates, one from the factorized likelihood function of Easley et al. (2010) (EHO) and one from that of Lin and Ke (2011) (LK). We find that the EHO-based estimate is systematically smaller than the LK-based estimate, indicating a downward bias associated with the EHO factorization. In addition, boundary solutions appear with very high frequency when the LK factorization is used, so the LK factorization should be used together with the algorithm developed in this paper. Finally, we document several interesting empirical properties of PIN.
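The model underlying PIN is the EKOP Poisson mixture over daily buy and sell counts. A minimal sketch of its log-likelihood, stabilized with `logsumexp` in the spirit of the LK factorization, is given below; parameter names, starting values, and the optimizer setup are illustrative assumptions, not the paper's algorithm (which additionally varies starting values to detect and handle boundary solutions):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, logsumexp

def pin_log_likelihood(params, buys, sells):
    """Log-likelihood of daily (buys, sells) counts under the EKOP mixture,
    computed via logsumexp for numerical stability (the idea behind the
    Lin-Ke factorization)."""
    alpha, delta, mu, eb, es = params
    B, S = np.asarray(buys, float), np.asarray(sells, float)

    def log_pois(k, lam):
        return k * np.log(lam) - lam - gammaln(k + 1)

    # Three regimes per day: no event, bad-news event, good-news event.
    ll_none = np.log(1 - alpha) + log_pois(B, eb) + log_pois(S, es)
    ll_bad = np.log(alpha * delta) + log_pois(B, eb) + log_pois(S, es + mu)
    ll_good = np.log(alpha * (1 - delta)) + log_pois(B, eb + mu) + log_pois(S, es)
    return logsumexp(np.stack([ll_none, ll_bad, ll_good]), axis=0).sum()

def estimate_pin(buys, sells, start=(0.4, 0.5, 50.0, 40.0, 40.0)):
    bounds = [(1e-6, 1 - 1e-6)] * 2 + [(1e-6, None)] * 3
    res = minimize(lambda p: -pin_log_likelihood(p, buys, sells),
                   start, bounds=bounds, method="L-BFGS-B")
    a, d, mu, eb, es = res.x
    return a * mu / (a * mu + eb + es)  # PIN
```

Running the optimizer from a grid of `start` values and flagging runs that terminate at the parameter bounds as boundary solutions would mimic the bias check the abstract describes.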

2.
Brockman and Turtle [J. Finan. Econ., 2003, 67, 511–529] develop a barrier option framework to show that default barriers are significantly positive, with most implied barriers typically larger than the book value of corporate liabilities. We show theoretically and empirically that this result is biased by the approximation of the market value of corporate assets as the sum of the market value of equity and the book value of liabilities, which leads to significant overestimation of the default barrier. To eliminate this bias, we propose a maximum likelihood (ML) estimation approach for the asset values, asset volatilities, and default barriers, and apply it to examine the default barriers of a large sample of industrial firms. This paper documents that default barriers are positive but not very significant; in our sample, most estimated barriers are lower than the book values of corporate liabilities. Beyond the problem with the default barriers, we find significant biases in Brockman and Turtle's estimates of the asset value and the asset volatility.

3.
This paper studies the parameter estimation problem for Ornstein–Uhlenbeck stochastic volatility models driven by Lévy processes. Estimation is regarded as the principal challenge in applying these models since they were proposed by Barndorff-Nielsen and Shephard [J. R. Stat. Soc. Ser. B, 2001, 63(2), 167–241]. Most previous work has used a Bayesian paradigm, whereas we treat the problem in the framework of maximum likelihood estimation, applying gradient-based simulation optimization. A hidden Markov model is introduced to formulate the likelihood of observations; sequential Monte Carlo is applied to sample the hidden states from the posterior distribution; smooth perturbation analysis is used to deal with the discontinuities introduced by jumps in estimating the gradient. Numerical experiments indicate that the proposed gradient-based simulated maximum likelihood estimation approach provides an efficient alternative to current estimation methods.

4.
Extract

While the principle of unbiasedness can be said to be appropriate in some linear estimation problems, we have just seen that in the present context we must appeal to other criteria. Let us first consider what the maximum likelihood method gives us. We do not claim any particular optimum property for this estimate of the risk distribution; it seems plausible, however, that one can prove a large-sample result analogous to the classical result on maximum likelihood estimation.

5.
Quantitative Finance, 2013, 13(3), 163–172
Abstract

Support vector machines (SVMs) are a new nonparametric tool for regression estimation. We use this tool to estimate the parameters of a GARCH model for predicting the conditional volatility of stock market returns. GARCH models are usually estimated by maximum likelihood (ML) procedures, assuming normally distributed data. In this paper, we show that GARCH models can be estimated using SVMs and that such estimates have higher predictive ability than those obtained via common ML methods.
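As a hedged illustration of the idea, the sketch below fits a support vector regression to lagged squared returns as a stand-in for the GARCH(1,1) recursion; the feature set, kernel, and hyperparameters are illustrative assumptions, not the paper's specification.

```python
import numpy as np
from sklearn.svm import SVR

def svr_volatility(returns, C=10.0, epsilon=1e-4):
    """Fit sigma_t^2 ~ f(r_{t-1}^2, r_{t-2}^2) with an SVR. Squared returns
    stand in for the latent conditional variance, which GARCH(1,1) would
    model as sigma_t^2 = w + a * r_{t-1}^2 + b * sigma_{t-1}^2."""
    r2 = np.asarray(returns, float) ** 2
    X = np.column_stack([r2[1:-1], r2[:-2]])  # (r2[t-1], r2[t-2])
    y = r2[2:]
    return SVR(kernel="rbf", C=C, epsilon=epsilon).fit(X, y)

# One-step-ahead variance forecast from the two most recent squared returns:
#   model = svr_volatility(returns)
#   model.predict([[r2[-1], r2[-2]]])
```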

6.
Evolving volatility is a dominant feature of most financial time series and a key parameter in option pricing and many other financial risk analyses. A number of methods for non-parametric scale estimation are reviewed and assessed against the stylized features of financial time series. A new non-parametric procedure for estimating historical volatility is proposed, based on local maximum likelihood estimation for the t-distribution. Its performance is assessed on simulated and real price data and is found to be the best among the estimators we consider; we propose it as a replacement for the moving-variance historical volatility estimator.
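A minimal sketch of such a local t-likelihood scale estimator, with an assumed exponential kernel and fixed degrees of freedom (both illustrative choices, not necessarily the paper's), might read:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import t as student_t

def local_t_volatility(returns, halflife=20, nu=5, min_window=10):
    """At each time point, maximize an exponentially weighted t(nu)
    log-likelihood over the scale parameter (local ML volatility)."""
    r = np.asarray(returns, float)
    lam = 0.5 ** (1.0 / halflife)
    out = np.full(len(r), np.nan)
    for i in range(min_window, len(r)):
        x = r[:i]
        w = lam ** np.arange(i, 0, -1)  # older observations get less weight
        nll = lambda s: -np.sum(w * student_t.logpdf(x, df=nu, scale=s))
        res = minimize_scalar(nll, bounds=(1e-8, 10 * x.std() + 1e-8),
                              method="bounded")
        out[i] = res.x
    return out
```

The moving-variance estimator it is meant to replace is simply a rolling standard deviation, which reacts much more violently to single outliers than the heavy-tailed t-likelihood does.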

7.
Abstract

This case study illustrates the analysis of two possible regression models for bivariate claims data. Estimates or forecasts of loss distributions under these two models are developed using two methods of analysis: (1) maximum likelihood estimation and (2) the Bayesian method. These methods are applied to two data sets consisting of 24 and 1,500 paired observations, respectively. The Bayesian analyses are implemented using Markov chain Monte Carlo via WinBUGS, as discussed in Scollnik (2001). A comparison of the analyses reveals that forecasted total losses can be dramatically underestimated by the maximum likelihood estimation method because it ignores the inherent parameter uncertainty.

8.

Lejeune and Sarda (1992) and Jones (1993) introduced the principle of local linear estimation to the nonparametric estimation of smooth densities on the positive real axis. This methodology yields the basic kernel smoother with Gasser and Müller (1979) type boundary kernels when estimating close to a boundary. The principle is extended here to nonparametric multivariate density estimation with arbitrary boundary regions.

9.
Abstract

In [1] it was shown that, under mild conditions, with probability approaching unity as n increases, the likelihood function has a relative maximum in a fixed open interval centered at the true value of the parameter. This paper strengthens that result by letting the length of the interval approach zero as n increases. An application is given.

10.
The estimation of the parameters of a continuous-time Markov chain from discrete-time observations, also known as the embedding problem for Markov chains, plays a particularly important role in modeling credit rating transitions. This missing-data problem reduces to a latent-variable setting, so maximum likelihood estimation is usually conducted with the expectation-maximization (EM) algorithm. We illustrate that the EM algorithm is likely to get stuck in local maxima of the likelihood function in this specific problem setting, and we adapt a stochastic approximation simulated annealing scheme (SASEM) as well as a genetic algorithm (GA) to combat this issue. Beyond that, our main contribution is to extend the GA method with a rejection sampling scheme, which yields stochastic monotone maximum likelihood estimates and hence proper (non-crossing) multi-year probabilities of default. We advocate this procedure because direct constrained optimization of the likelihood function is numerically unstable given the large number of side conditions. Furthermore, the monotonicity constraint combines structural knowledge of the ordinality of credit ratings with real-life data in a statistical estimator, which has a stabilizing effect on far off-diagonal generator matrix elements. We illustrate our methods on Standard and Poor's credit rating data and in a simulation study, and benchmark our novel procedure against an existing smoothing algorithm.
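For context, the likelihood in the embedding problem has a compact form: given one-period transition counts N and a candidate generator Q, the log-likelihood is the sum over (i, j) of N_ij * log [expm(Q * dt)]_ij. A minimal sketch of this objective (the parametrization and names are illustrative), over which EM, SASEM, or a GA would then search:

```python
import numpy as np
from scipy.linalg import expm

def neg_log_likelihood(theta, counts, dt=1.0):
    """Negative log-likelihood of a CTMC generator given one-period
    transition counts[i, j]; theta holds log off-diagonal rates, row-major."""
    K = counts.shape[0]
    Q = np.zeros((K, K))
    off_diag = ~np.eye(K, dtype=bool)
    Q[off_diag] = np.exp(theta)          # positivity via log-parametrization
    np.fill_diagonal(Q, -Q.sum(axis=1))  # generator rows sum to zero
    P = expm(Q * dt)                     # implied discrete transition matrix
    return -np.sum(counts * np.log(np.clip(P, 1e-300, None)))
```

Restarting a local optimizer on this surface from many random `theta` draws already exposes the multimodality the abstract describes; the monotonicity (non-crossing default probability) constraints handled by the rejection sampling extension are not shown here.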

11.
This paper develops a maximum likelihood estimation method for the deposit insurance pricing model of Duan, Moreau and Sealey (DMS) [J. Banking Financ., 1995, 19, 1091]. A sample of 10 US banks is used to illustrate the estimation method, and our results are compared to those obtained with the modified Ronn–Verma method used in DMS. Our findings reveal that the maximum likelihood method yields much larger estimates of the deposit insurance value than the modified Ronn–Verma method. We conduct a Monte Carlo study to ascertain the performance of the maximum likelihood estimation method; the simulation results are clearly in favor of our proposed method.

12.
This paper proposes a two-step methodology for Value-at-Risk prediction. The first step estimates a GARCH model by quasi-maximum likelihood; the second step fits the skewed t distribution of Azzalini and Capitanio [J. R. Stat. Soc. B, 2003, 65, 367–389] to the model-filtered returns. The predictive performance of this method is compared to single-step joint estimation of the same data-generating process, to the well-known GARCH-EVT model, and to a comprehensive set of other market risk models. Backtesting results show that the proposed two-step method outperforms most benchmarks, including classical joint estimation of the same data-generating process, and performs competitively with the GARCH-EVT model. This paper recommends two robust models to risk managers of emerging-market stock portfolios, both estimated in two steps: the GJR-GARCH-EVT model and the two-step GARCH-St model proposed in this study.
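A hedged sketch of the two-step recipe follows, using the `arch` package for the QML GARCH step and a symmetric Student t in place of the Azzalini–Capitanio skewed t (which scipy does not ship); the mean term is ignored, `returns` is assumed to be a pandas Series of percent returns, and all names are illustrative.

```python
import numpy as np
from arch import arch_model          # assumed third-party dependency
from scipy.stats import t as student_t

def two_step_var(returns, alpha=0.01):
    """Step 1: Gaussian quasi-ML GARCH(1,1) fit.
    Step 2: fit a heavy-tailed law to the filtered (standardized) residuals
    and scale its alpha-quantile by the one-step-ahead volatility forecast.
    A symmetric t stands in for the paper's Azzalini-Capitanio skewed t."""
    res = arch_model(returns, vol="GARCH", p=1, q=1, dist="normal").fit(disp="off")
    z = res.std_resid.dropna()           # model-filtered returns
    df, loc, scale = student_t.fit(z)
    sigma_next = float(np.sqrt(res.forecast(horizon=1).variance.iloc[-1, 0]))
    return sigma_next * student_t.ppf(alpha, df, loc, scale)  # VaR quantile
```

Swapping in the skewed t would recover the paper's GARCH-St specification; fitting a generalized Pareto tail to the residuals instead gives the GARCH-EVT benchmark.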

13.
Quantitative Finance, 2013, 13(5), 376–384
Abstract

Volatility plays an important role in derivatives pricing, asset allocation, and risk management, to name but a few areas. It is therefore crucial to make the utmost use of the scant information typically available in short time windows when estimating volatility. We propose a volatility estimator that uses the high and the low in addition to the close price, all of which are typically available to investors. The estimator is based on a maximum likelihood approach: we present explicit formulae for the likelihood of the drift and volatility parameters when the underlying asset follows a Brownian motion with constant drift and volatility, and we maximize this likelihood to obtain the volatility estimator. While we present the method in the context of a Brownian motion, the general methodology applies whenever one can obtain the likelihood of the volatility parameter given the high, low and close information. Simulations indicate that our estimator consistently outperforms existing estimators that use the same information and assumptions, and simulations on real price data show that our method produces more stable estimates. We also consider the effects of quantized prices and discretized time.
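As a flavor of the approach, here is a hedged sketch using only the high and the close (relative to the open): by the reflection principle, the joint density of the running maximum M_t and terminal value W_t of a Brownian motion with drift mu and volatility sigma is f(m, w) = 2(2m - w)/(sigma^3 t^{3/2}) * phi((2m - w)/(sigma sqrt(t))) * exp(mu w / sigma^2 - mu^2 t / (2 sigma^2)) for m >= max(0, w), where phi is the standard normal density. Adding the low requires the doubly reflected (infinite-series) density and is omitted here; all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def log_density_high_close(m, w, mu, sigma, dt=1.0):
    """log f(M=m, W=w) for drifted BM via reflection plus a Girsanov tilt."""
    x = np.maximum(2 * m - w, 1e-12)
    log_f0 = (np.log(2 * x) - 3 * np.log(sigma) - 1.5 * np.log(dt)
              - 0.5 * np.log(2 * np.pi) - x ** 2 / (2 * sigma ** 2 * dt))
    return log_f0 + mu * w / sigma ** 2 - mu ** 2 * dt / (2 * sigma ** 2)

def ml_vol_high_close(high, close, open_):
    """Daily-frequency MLE of (mu, sigma) from highs and closes."""
    m = np.log(np.asarray(high) / open_)   # running max of the log price
    w = np.log(np.asarray(close) / open_)  # terminal log price
    nll = lambda p: -np.sum(log_density_high_close(m, w, p[0], abs(p[1])))
    res = minimize(nll, x0=[0.0, np.std(w) + 1e-6], method="Nelder-Mead")
    return res.x[0], abs(res.x[1])  # (mu_hat, sigma_hat) per day
```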

14.
15.
The fact that value shares outperform glamour shares in the long term has been known for over 50 years. Why then do glamour shares remain popular? The price-earnings (P/E) ratio was the first statistic documented to discriminate between the two. Using data for all US stocks since 1983, we find that glamour shares have a much greater tendency to change P/E decile than value shares. We use TreeAge decision tree software, which has not been applied to problems in finance before, to show that glamour investors cannot rationally expect any windfall as their company's P/E decile changes, whatever their horizon. We infer that glamour investors anchor on the initially high P/E value, underestimate the likelihood of change and are continually surprised. We also seek theoretical justification for why value shares tend to outperform glamour shares. No convincing arguments based on the efficient market hypothesis have been put forward to show that the outperformance of value shares might be due to their being fundamentally riskier. Here, we apply equations from option theory to show that value shares can indeed be expected to outperform glamour shares.

16.
Abstract

This paper examines the so-called 1/n investment puzzle observed in defined contribution plans, whereby some participants divide their contributions equally among the available asset classes. It has been argued that this is a very naive strategy, since it contradicts the fundamental tenets of modern portfolio theory. We use simple arguments to show that this behavior is perhaps less naive than it first appears. It is well known that optimal portfolio weights in a mean-variance setting are extremely sensitive to estimation errors, especially those in the expected returns. We show that once estimation error is accounted for, the 1/n rule has advantages in terms of robustness, and we demonstrate this with numerical experiments. The rule can provide a risk-averse investor with protection against very bad outcomes.
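The experiment is easy to reproduce in miniature. The hedged sketch below (all numbers illustrative) draws returns from known moments, feeds sample estimates into an unconstrained mean-variance rule, and evaluates both that rule and 1/n under the true moments; the plug-in rule's worst outcomes illustrate the robustness argument.

```python
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_obs, n_trials = 10, 120, 1000       # e.g. 10 years of monthly data
mu_true = rng.uniform(0.002, 0.010, n_assets)
A = rng.normal(0, 0.02, (n_assets, n_assets))
cov_true = A @ A.T + 0.001 * np.eye(n_assets)   # well-conditioned covariance

def mv_weights(mu, cov):
    w = np.linalg.solve(cov, mu)                # plug-in mean-variance weights
    return w / w.sum()

w_eq = np.ones(n_assets) / n_assets             # the 1/n rule
sharpe = lambda w: w @ mu_true / np.sqrt(w @ cov_true @ w)
mv, eq = [], []
for _ in range(n_trials):
    sample = rng.multivariate_normal(mu_true, cov_true, n_obs)
    mv.append(sharpe(mv_weights(sample.mean(axis=0), np.cov(sample.T))))
    eq.append(sharpe(w_eq))
print(f"plug-in MV: mean {np.mean(mv):.3f}, worst {np.min(mv):.3f}; "
      f"1/n: {np.mean(eq):.3f}")
```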

17.
This paper provides new evidence concerning the probability of informed trading (PIN) and the PIN-return relationship. We take measures to overcome known estimation biases and improve the quality of quarterly PIN estimates. We use the average of a firm’s PIN estimates in four consecutive quarters to smooth out the effect of seasonal variation in trading activities. We find that when high-quality PIN estimates are used, the Fama–MacBeth cross-sectional regressions show stronger evidence for the positive PIN-return relationship than documented in the prior literature. This finding is robust to controls for the January, liquidity, and momentum effects.
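A hedged sketch of the two ingredients, with illustrative column names and controls: the four-quarter smoothing of PIN, and the Fama–MacBeth estimator (period-by-period cross-sectional OLS, then the time-series mean and t-statistic of each slope).

```python
import numpy as np
import pandas as pd

def fama_macbeth(panel, y="ret", xs=("pin", "beta", "size", "bm")):
    """panel: DataFrame with columns ['month', y, *xs]. Returns the
    time-series mean and t-stat of each cross-sectional slope."""
    slopes = []
    for _, g in panel.groupby("month"):
        X = np.column_stack([np.ones(len(g))] + [g[x].to_numpy() for x in xs])
        coef, *_ = np.linalg.lstsq(X, g[y].to_numpy(), rcond=None)
        slopes.append(coef[1:])                 # drop the intercept
    S = np.asarray(slopes)
    mean = S.mean(axis=0)
    tstat = mean / (S.std(axis=0, ddof=1) / np.sqrt(len(S)))
    return dict(zip(xs, zip(mean, tstat)))

# Four-quarter smoothing of quarterly PIN estimates, per firm:
#   df["pin_smooth"] = df.groupby("permno")["pin"].transform(
#       lambda s: s.rolling(4).mean())
```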

18.
Option hedging is a critical risk management problem in finance. In the Black–Scholes model, it has been recognized that computing a hedging position from the sensitivity of the calibrated model option value function is inadequate for minimizing the variance of the option hedge risk, as it fails to capture the dependence of the model parameters on the underlying price (see e.g. Coleman et al., J. Risk, 2001, 5(6), 63–89; Hull and White, J. Bank. Finance, 2017, 82, 180–190). In this paper, we demonstrate that this issue arises generally when the hedging position is determined from the sensitivity of the option function, whether calibrated to a parametric model from current option prices or estimated nonparametrically from historical option prices. Consequently, the sensitivity of the estimated model option function typically does not minimize the variance of the hedge risk, even instantaneously. We propose a data-driven approach that directly learns a hedging function from market data by minimizing the variance of the local hedge risk. Using S&P 500 index daily option data for more than a decade ending in August 2015, we show that the proposed method outperforms the parametric minimum variance hedging method of Hull and White [J. Bank. Finance, 2017, 82, 180–190], as well as minimum variance hedging corrections based on stochastic volatility or local volatility models. Furthermore, the proposed approach achieves significant gains over implied Black–Scholes delta hedging for weekly and monthly hedging.
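In the simplest linear version, learning a hedging function that minimizes the variance of the local hedge risk reduces to a least-squares problem. The sketch below is a hedged, linear-in-features stand-in for the paper's learned hedging function; the feature choices are assumptions, and the paper's method is more general.

```python
import numpy as np

def fit_hedge_function(dC, dS, features):
    """Choose h(x) = beta . x so that dC - h(x) * dS is small in mean square,
    which approximates minimizing the variance of the local hedge error
    when that error has near-zero mean.
    features: array (n, k), e.g. columns [1, bs_delta, bs_vega / S]."""
    X = features * dS[:, None]            # column k holds feature_k * dS
    beta, *_ = np.linalg.lstsq(X, dC, rcond=None)
    return beta                           # hedge ratio at x: features @ beta

# With features [1, delta, vega/S], the fitted rule echoes Hull-White style
# minimum-variance corrections, which adjust delta by a vega/S term.
```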

19.
In illiquid markets, option traders may have an incentive to increase their portfolio value by exploiting their impact on the dynamics of the underlying. We provide a mathematical framework for constructing optimal trading strategies under market impact in a multi-player setting by introducing strategic interactions into the model of Almgren [Appl. Math. Finance, 2003, 10(1), 1–18]. Specifically, we consider a financial market model with several strategically interacting players who hold European contingent claims and whose trading decisions affect the price evolution of the underlying. We establish existence and uniqueness of equilibria for risk-neutral and CARA investors and show that the equilibrium dynamics can be characterized in terms of a coupled system of possibly nonlinear PDEs. For the linear cost function used by Almgren, we obtain a (semi) closed-form solution, and by analysing it we show how market manipulation can be reduced.

20.
Quantitative Finance, 2013, 13(5), 329–336
Abstract

Current Monte Carlo pricing engines may face a computational challenge with the Greeks, not only because of the time they consume but also because of the poor convergence of finite-difference estimates based on brute-force perturbation; the same applies to conditional expectations. In this short paper, following Fournié et al (Fournié E, Lasry J M, Lebuchoux J, Lions P L and Touzi N 1999 Finance Stochastics 3 391–412), we explain how to tackle this issue using Malliavin calculus to smooth the payoff being estimated. We discuss the relationship with the likelihood ratio method of Broadie and Glasserman (Broadie M and Glasserman P 1996 Manag. Sci. 42 269–85). We show by numerical results the efficiency of this method, discuss when it is and is not appropriate to use, and show how to apply it to the Heston model.
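For comparison, the likelihood ratio method referenced above has a one-line Monte Carlo form in the Black–Scholes setting, where it coincides with the simplest Malliavin weight for delta. This hedged sketch (parameters illustrative) avoids differentiating the payoff, which is exactly what helps when the payoff is discontinuous:

```python
import numpy as np

def lr_delta(S0, K, r, sigma, T, n_paths=1_000_000, seed=0):
    """Likelihood-ratio delta of a European call under GBM:
    Delta = E[exp(-rT) * payoff(S_T) * Z / (S0 * sigma * sqrt(T))],
    where Z is the standard normal driving the terminal price."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * Z)
    payoff = np.maximum(ST - K, 0.0)
    weight = Z / (S0 * sigma * np.sqrt(T))
    return np.exp(-r * T) * np.mean(payoff * weight)

# lr_delta(100, 100, 0.05, 0.2, 1.0) lands near the closed-form BS delta
# of about 0.637; a bumped finite-difference estimator needs far more paths
# when the payoff is discontinuous (e.g. a digital option).
```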
