Similar Articles
20 similar articles found.
1.
We present, in a Monte Carlo simulation framework, a novel approach for the evaluation of hybrid local volatility [Risk, 1994, 7, 18–20], [Int. J. Theor. Appl. Finance, 1998, 1, 61–110] models. In particular, we consider the stochastic local volatility model—see e.g. Lipton et al. [Quant. Finance, 2014, 14, 1899–1922], Piterbarg [Risk, 2007, April, 84–89], Tataru and Fisher [Quantitative Development Group, Bloomberg Version 1, 2010], Lipton [Risk, 2002, 15, 61–66]—and the local volatility model incorporating stochastic interest rates—see e.g. Atlan [arXiv preprint math/0604316, 2006], Piterbarg [Risk, 2006, 19, 66–71], Deelstra and Rayée [Appl. Math. Finance, 2012, 1–23], Ren et al. [Risk, 2007, 20, 138–143]. For both model classes a particular (conditional) expectation needs to be evaluated, which cannot be extracted from the market and is expensive to compute. We establish accurate and ‘cheap to evaluate’ approximations for these expectations by means of the stochastic collocation method [SIAM J. Numer. Anal., 2007, 45, 1005–1034], [SIAM J. Sci. Comput., 2005, 27, 1118–1139], [Math. Models Methods Appl. Sci., 2012, 22, 1–33], [SIAM J. Numer. Anal., 2008, 46, 2309–2345], [J. Biomech. Eng., 2011, 133, 031001], which was recently applied in the financial context [Available at SSRN 2529691, 2014], [J. Comput. Finance, 2016, 20, 1–19], combined with standard regression techniques. Monte Carlo pricing experiments confirm that our method is highly accurate and fast.
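A minimal sketch of the regression step referred to above, using a plain Heston-type simulation and a low-order polynomial least-squares fit as a stand-in for the paper's stochastic collocation plus regression machinery; all parameter values (kappa, theta, xi, rho, r) are illustrative assumptions:

```python
import numpy as np

# Minimal sketch: estimate E[V_T | S_T = s] from simulated Heston-type paths with a
# low-order polynomial regression. The paper combines stochastic collocation with
# regression; here only a plain least-squares regression proxy is illustrated, and
# all parameter values are illustrative assumptions.
rng = np.random.default_rng(0)
n_paths, n_steps, dt = 50_000, 100, 1.0 / 100
kappa, theta, xi, rho, r = 1.5, 0.04, 0.5, -0.7, 0.01

S = np.full(n_paths, 100.0)
V = np.full(n_paths, 0.04)
for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
    Vp = np.maximum(V, 0.0)                      # full truncation of the variance
    S *= np.exp((r - 0.5 * Vp) * dt + np.sqrt(Vp * dt) * z1)
    V += kappa * (theta - Vp) * dt + xi * np.sqrt(Vp * dt) * z2

# Cheap proxy for the conditional expectation: regress V_T on a polynomial in log S_T.
x = np.log(S)
coeffs = np.polyfit(x, np.maximum(V, 0.0), deg=4)
cond_exp = lambda s: np.polyval(coeffs, np.log(s))   # approx. E[V_T | S_T = s]
print(cond_exp(np.array([80.0, 100.0, 120.0])))
```

In a stochastic local volatility calibration, a proxy of this kind would be recomputed at every time step and fed into the leverage function.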

2.
We consider the three-factor double mean reverting (DMR) option pricing model of Gatheral [Consistent Modelling of SPX and VIX Options, 2008], a model which can be successfully calibrated to both VIX options and SPX options simultaneously. One drawback of this model is that calibration may be slow because no closed-form solution for European options exists. In this paper, we apply modified versions of the second-order Monte Carlo scheme of Ninomiya and Victoir [Appl. Math. Finance, 2008, 15, 107–121], and compare these to the Euler–Maruyama scheme with full truncation of Lord et al. [Quant. Finance, 2010, 10(2), 177–194], demonstrating on the one hand that fast calibration of the DMR model is practical, and on the other that suitably modified Ninomiya–Victoir schemes are applicable to the simulation of much more complicated time-homogeneous models than may have been thought previously.
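For reference, a minimal sketch of the full-truncation Euler–Maruyama benchmark mentioned above, applied to a single square-root factor; the DMR model itself has three factors, and the parameter values here are illustrative assumptions:

```python
import numpy as np

# Minimal sketch of the Euler–Maruyama scheme with full truncation (Lord et al.)
# for one square-root (CIR-type) factor, dv = kappa*(theta - v) dt + xi*sqrt(v) dW.
# Only the truncation device itself is shown, with illustrative parameter values.
def euler_full_truncation(v0, kappa, theta, xi, T, n_steps, n_paths, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    v = np.full(n_paths, v0)
    for _ in range(n_steps):
        vp = np.maximum(v, 0.0)                 # truncate only inside drift and diffusion
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)
        v = v + kappa * (theta - vp) * dt + xi * np.sqrt(vp) * dw
    return v

print(euler_full_truncation(0.04, 1.2, 0.04, 0.6, 1.0, 252, 100_000).mean())
```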

3.
Hai Lin, Quantitative Finance, 2018, 18(9), 1453–1470
This paper investigates the impact of the tightened trading rules of 2015 on the market efficiency and price discovery function of Chinese stock index futures. The market efficiency and price discovery of Chinese stock index futures do not deteriorate after these rule changes. Using variance ratio and spectral shape tests, we find that the Chinese index futures market becomes even more efficient after the tightened rules came into effect. Furthermore, by employing the Schwarz and Szakmary [J. Futures Markets, 1994, 14(2), 147–167] and Hasbrouck [J. Finance, 1995, 50(4), 1175–1199] price discovery measures, we find that the price discovery function, to some extent, improves. This finding is consistent with Stein [J. Finance, 2009, 64(4), 1517–1548], who documents that regulations on leverage can be helpful in a bad market state, and with Zhu [Rev. Financ. Stud., 2014, 27(3), 747–789], who finds that price discovery can be improved with reduced liquidity. It also suggests that the new rules may effectively regulate manipulation in the Chinese stock index futures market during a bad market state, thereby positively affecting its market efficiency and price discovery function.
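A minimal sketch of the variance-ratio idea behind such efficiency tests, in its plain homoskedastic Lo–MacKinlay form rather than the exact test battery of the paper; the input return series is an assumed placeholder:

```python
import numpy as np

# Minimal sketch of a variance ratio VR(q): under the random-walk (weak-form
# efficiency) null, the variance of q-period returns is q times the variance of
# 1-period returns, so VR(q) should be close to 1. Plain homoskedastic statistic only.
def variance_ratio(returns, q):
    r = np.asarray(returns, dtype=float)
    r_q = np.convolve(r, np.ones(q), mode="valid")   # overlapping q-period returns
    return r_q.var(ddof=1) / (q * r.var(ddof=1))

rng = np.random.default_rng(1)
iid_returns = rng.standard_normal(5_000) * 0.01      # placeholder return series
print(variance_ratio(iid_returns, q=5))              # close to 1 under the null
```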

4.
This study presents a set of closed-form exact solutions for pricing discretely sampled variance swaps and volatility swaps, based on the Heston stochastic volatility model with regime switching. In comparison with all previous studies in the literature, this research, which obtains closed-form exact solutions for variance and volatility swaps with discrete sampling times, serves several purposes. (1) It verifies the degree of validity of Elliott et al.'s [Appl. Math. Finance, 2007, 14(1), 41–62] continuous-sampling-time approximation for variance and volatility swaps with relatively short sampling periods. (2) It examines the effect of ignoring regime switching on pricing variance and volatility swaps. (3) It contributes to bridging the gap between Zhu and Lian's [Math. Finance, 2011, 21(2), 233–256] approach and Elliott et al.'s framework. (4) Finally, it presents a semi-Monte-Carlo simulation for the pricing of other important realized-variance-based derivatives.
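To fix conventions, a minimal sketch of the discretely sampled realized variance underlying a variance swap, together with a brute-force Monte Carlo fair strike under a plain GBM placeholder; the annualisation factor, the 100² scaling to variance points and all parameters are illustrative assumptions (the paper's contribution is closed-form pricing, not simulation):

```python
import numpy as np

# Minimal sketch: discretely sampled realized variance and a brute-force Monte Carlo
# fair strike. AF is the annualisation factor; the 100^2 scaling to variance points
# and the GBM path generator are illustrative assumptions only.
def realized_variance(path, AF):
    log_ret = np.diff(np.log(path), axis=-1)
    return (AF / log_ret.shape[-1]) * np.sum(log_ret**2, axis=-1) * 100**2

rng = np.random.default_rng(2)
n_paths, N, T, sigma, r = 20_000, 252, 1.0, 0.2, 0.0
dt = T / N
z = rng.standard_normal((n_paths, N))
increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
paths = 100.0 * np.exp(np.cumsum(increments, axis=1))
paths = np.hstack([np.full((n_paths, 1), 100.0), paths])

fair_strike = realized_variance(paths, AF=N / T).mean()   # roughly sigma^2 * 100^2 = 400
print(fair_strike)
```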

5.
Option hedging is a critical risk management problem in finance. In the Black–Scholes model, it has been recognized that computing a hedging position from the sensitivity of the calibrated model option value function is inadequate for minimizing the variance of the option hedge risk, as it fails to capture the dependence of the model parameters on the underlying price (see e.g. Coleman et al., J. Risk, 2001, 5(6), 63–89; Hull and White, J. Bank. Finance, 2017, 82, 180–190). In this paper, we demonstrate that this issue can exist generally when determining a hedging position from the sensitivity of the option function, whether it is calibrated from a parametric model using current option prices or estimated nonparametrically from historical option prices. Consequently, the sensitivity of the estimated model option function typically does not minimize the variance of the hedge risk, even instantaneously. We propose a data-driven approach to directly learn a hedging function from market data by minimizing the variance of the local hedge risk. Using S&P 500 index daily option data for more than a decade ending in August 2015, we show that the proposed method outperforms the parametric minimum variance hedging method proposed in Hull and White [J. Bank. Finance, 2017, 82, 180–190], as well as minimum variance hedging corrective techniques based on stochastic volatility or local volatility models. Furthermore, we show that the proposed approach achieves significant gains over implied BS delta hedging for weekly and monthly hedging.
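A minimal sketch of the empirical minimum-variance hedging idea in its simplest form: the delta that minimizes the sample variance of the local hedge error is the OLS slope of option price changes on underlying price changes. The paper learns a full hedging function of option features; the arrays below are synthetic placeholders:

```python
import numpy as np

# Minimal sketch: choose delta to minimise the sample variance of dV - delta * dS,
# i.e. the OLS slope of option price changes on underlying price changes.
# Inputs are synthetic placeholders for observed daily changes.
def min_variance_delta(dV, dS):
    dV, dS = np.asarray(dV, float), np.asarray(dS, float)
    return np.cov(dV, dS)[0, 1] / np.var(dS, ddof=1)

rng = np.random.default_rng(3)
dS = rng.standard_normal(500)
dV = 0.55 * dS + 0.05 * rng.standard_normal(500)    # synthetic option P&L
print(min_variance_delta(dV, dS))                   # close to 0.55
```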

6.
Under the general affine jump-diffusion framework of Duffie et al. [Econometrica, 2000, 68, 1343–1376], this paper proposes an alternative pricing methodology for European-style forward start options that does not require any parallel optimization routine to ensure square integrability. Therefore, the proposed methodology is shown to possess a better accuracy–efficiency trade-off than the usual and more general approach initiated by Hong [Forward Smile and Derivative Pricing. Working paper, UBS, 2004] that is based on the knowledge of the forward characteristic function. Explicit pricing solutions are also offered under the nested jump-diffusion setting proposed by Bakshi et al. [J. Finance, 1997, 52, 2003–2049], which accommodates stochastic volatility and stochastic interest rates, and different integration schemes are numerically tested.

7.
The GARCH model has been very successful in capturing the serial correlation of asset return volatilities. As a result, applying the model to options pricing attracts a lot of attention. However, previous tree-based GARCH option pricing algorithms suffer from exponential running time, a cut-off maturity, inaccuracy, or some combination thereof. Specifically, this paper proves that the popular trinomial-tree option pricing algorithms of Ritchken and Trevor (Ritchken, P. and Trevor, R., Pricing options under generalized GARCH and stochastic volatility processes. J. Finance, 1999, 54(1), 377–402) and Cakici and Topyan (Cakici, N. and Topyan, K., The GARCH option pricing model: a lattice approach. J. Comput. Finance, 2000, 3(4), 71–85) explode exponentially when the number of partitions per day, n, exceeds a threshold determined by the GARCH parameters. Furthermore, when explosion happens, the tree cannot grow beyond a certain maturity date, making it unable to price derivatives with a longer maturity. As a result, the algorithms must be limited to using small n, which may cause accuracy problems. The paper presents an alternative trinomial-tree GARCH option pricing algorithm. This algorithm provably does not have the short-maturity problem. Furthermore, the tree-size growth is guaranteed to be quadratic if n is less than a threshold easily determined by the model parameters. This level of efficiency makes the proposed algorithm practical. This surprising finding places, for the first time, a tree-based GARCH option pricing algorithm in the same complexity class as binomial trees under the Black–Scholes model. Extensive numerical evaluation is conducted to confirm the analytical results and the numerical accuracy of the proposed algorithm. Of independent interest is a simple and efficient technique to calculate the transition probabilities of a multinomial tree using generating functions.
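A minimal sketch of the generating-function technique mentioned at the end: after n trinomial steps the node probabilities are the coefficients of (p_d + p_m x + p_u x²)^n, which can be obtained by repeated polynomial convolution; the one-step probabilities below are illustrative, not calibrated GARCH values:

```python
import numpy as np

# Minimal sketch: the probability of reaching each node after n trinomial steps with
# one-step probabilities (p_d, p_m, p_u) is the coefficient of x^k in
# (p_d + p_m*x + p_u*x^2)^n, computed here by repeated polynomial convolution.
def trinomial_node_probabilities(p_d, p_m, p_u, n):
    probs = np.array([1.0])
    step = np.array([p_d, p_m, p_u])
    for _ in range(n):
        probs = np.convolve(probs, step)
    return probs            # index k corresponds to net displacement k - n; length 2n + 1

p = trinomial_node_probabilities(0.3, 0.4, 0.3, n=50)
print(len(p), p.sum())      # 101 nodes, probabilities sum to 1
```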

8.
The exploration of the mean-reversion of commodity prices is important for inventory management, inflation forecasting and contingent claim pricing. Bessembinder et al. [J. Finance, 1995, 50, 361–375] document the mean-reversion of commodity spot prices using futures term structure data; however, mean-reversion to a constant level is rejected in nearly all studies using historical spot price time series. This indicates that the spot prices revert to a stochastic long-run mean. Recognizing this, I propose a reduced-form model with the stochastic long-run mean as a separate factor. This model fits the futures dynamics better than do classical models such as the Gibson–Schwartz [J. Finance, 1990, 45, 959–976] model and the Casassus–Collin-Dufresne [J. Finance, 2005, 60, 2283–2331] model with a constant interest rate. An application for option pricing is also presented in this paper.
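A minimal sketch of the generic two-factor structure described above, with the log spot reverting to a stochastic long-run mean that is itself mean-reverting; this is not the paper's exact reduced-form specification, and the Euler scheme and parameter values are illustrative assumptions:

```python
import numpy as np

# Minimal sketch: log spot x_t reverts to a stochastic long-run mean m_t, which itself
# follows a mean-reverting process. Not the paper's exact specification; parameters
# and the Euler discretisation are illustrative assumptions.
rng = np.random.default_rng(4)
n_paths, n_steps, dt = 100_000, 252, 1.0 / 252
kappa, a, b = 2.0, 0.5, np.log(60.0)         # reversion speeds and long-run anchor
sigma_x, sigma_m, rho = 0.3, 0.15, 0.3

x = np.full(n_paths, np.log(55.0))           # log spot
m = np.full(n_paths, np.log(58.0))           # stochastic long-run mean
for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
    x += kappa * (m - x) * dt + sigma_x * np.sqrt(dt) * z1
    m += a * (b - m) * dt + sigma_m * np.sqrt(dt) * z2

print(np.exp(x).mean())                      # simulated spot distribution at T = 1
```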

9.
This paper considers the problem of pricing American options when the dynamics of the underlying are driven by both stochastic volatility following a square-root process, as used by Heston [Rev. Financial Stud., 1993, 6, 327–343], and a Poisson jump process, as introduced by Merton [J. Financial Econ., 1976, 3, 125–144]. Probability arguments are invoked to find a representation of the solution in terms of expectations over the joint distribution of the underlying process. A combination of Fourier transform in the log stock price and Laplace transform in the volatility is then applied to find the transition probability density function of the underlying process. It turns out that the price is given by an integral dependent upon the early exercise surface, for which a corresponding integral equation is obtained. The solution generalizes in an intuitive way the structure of the solution to the corresponding European option pricing problem obtained by Scott [Math. Finance, 1997, 7(4), 413–426], but here in the case of a call option and constant interest rates.

10.
Many empirical studies have shown that financial asset returns do not always exhibit Gaussian distributions, for example hedge fund returns. The introduction of the family of Johnson distributions allows a better fit to empirical financial data. Additionally, this class can be extended to a quite general family of distributions by considering all possible regular transformations of the standard Gaussian distribution. In this framework, we consider the portfolio optimal positioning problem, which was first addressed by Brennan and Solanki [J. Financial Quant. Anal., 1981, 16, 279–300] and Leland [J. Finance, 1980, 35, 581–594] and further developed by Carr and Madan [Quant. Finance, 2001, 1, 9–37] and Prigent [Generalized option based portfolio insurance. Working Paper, THEMA, University of Cergy-Pontoise, 2006]. As a by-product, we introduce the notion of Johnson stochastic processes. We determine and analyse the optimal portfolio for log returns having Johnson distributions. The solution is characterized for arbitrary utility functions and illustrated in particular for a CRRA utility. Our findings show how the profiles of financial structured products must be selected when taking account of non-Gaussian log-returns.
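A minimal sketch of the Johnson S_U construction underlying this family: a regular transformation of a standard Gaussian, Y = xi + lam * sinh((Z - gamma)/delta), yields skewed and fat-tailed log-returns; parameter values are illustrative assumptions:

```python
import numpy as np

# Minimal sketch of the Johnson S_U transformation: if Z is standard normal, then
# Y = xi + lam * sinh((Z - gamma)/delta) has a Johnson S_U distribution, i.e.
# gamma + delta * asinh((Y - xi)/lam) is standard normal. Illustrative parameters;
# positive gamma induces negative skewness.
rng = np.random.default_rng(5)
gamma, delta, xi, lam = 0.8, 1.6, 0.0005, 0.01

z = rng.standard_normal(250_000)
log_returns = xi + lam * np.sinh((z - gamma) / delta)

m = log_returns.mean()
s = log_returns.std()
skew = ((log_returns - m) ** 3).mean() / s**3
kurt = ((log_returns - m) ** 4).mean() / s**4
print(skew, kurt)   # negatively skewed and fat-tailed vs. the Gaussian (skew 0, kurtosis 3)
```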

11.
We study the skewness premium (SK) introduced by Bates [J. Finance, 1991, 46(3), 1009–1044] in a general context using Lévy processes. Under a symmetry condition, Fajardo and Mordecki [Quant. Finance, 2006, 6(3), 219–227] showed that SK is given by Bates' x% rule. In this paper, we study SK in the absence of that symmetry condition. More precisely, we derive sufficient conditions for the excess of SK to be positive or negative, in terms of the characteristic triplet of the Lévy process under a risk-neutral measure.

12.
We apply the bootstrap technique proposed by Kosowski et al. [J. Finance, 2006, 61, 2551–2595] in conjunction with Carhart's [J. Finance, 1997, 52, 57–82] unconditional and Ferson and Schadt's [J. Finance, 1996, 51, 425–461] conditional four-factor models of performance to examine whether the performances of enhanced-return index funds over the 1996 to 2007 period are based on luck or superior ‘enhancing’ skills. The advantages of using the bootstrap to rank fund performance are many. It eliminates the need to specify the exact shape of the distribution from which returns are drawn and does not require estimating correlations between portfolio returns. It also eliminates the need to explicitly control for potential ‘data snooping’ biases that arise from an ex-post sort. Our results show evidence of enhanced-return index funds with positive and significant alphas after controlling for luck and sampling variability. The results are robust to both stock-only and derivative-enhanced index funds, although the spread of cross-sectional alphas for derivative-enhanced funds is slightly more pronounced. The study also examines various sub-periods within the sample horizon.
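A minimal sketch of the residual-bootstrap step for a single fund, assuming a synthetic factor matrix and fund return series; a real application of the Kosowski et al. procedure loops over all funds and compares the cross-section of actual alphas (or t-statistics) with the bootstrapped luck distribution:

```python
import numpy as np

# Minimal sketch: estimate a fund's four-factor alpha by OLS, then resample the
# regression residuals under the null alpha = 0 and re-estimate alpha on each
# pseudo-sample. Factors and fund returns below are synthetic placeholders.
rng = np.random.default_rng(6)
T = 144
factors = rng.standard_normal((T, 4)) * 0.03                 # synthetic MKT, SMB, HML, MOM
betas_true = np.array([0.9, 0.2, -0.1, 0.05])
fund_excess = factors @ betas_true + rng.standard_normal(T) * 0.01   # true alpha = 0

X = np.column_stack([np.ones(T), factors])
coef, *_ = np.linalg.lstsq(X, fund_excess, rcond=None)
alpha_hat = coef[0]
resid = fund_excess - X @ coef

boot_alphas = np.empty(2_000)
for b in range(boot_alphas.size):
    y_b = factors @ coef[1:] + rng.choice(resid, size=T, replace=True)  # impose alpha = 0
    c_b, *_ = np.linalg.lstsq(X, y_b, rcond=None)
    boot_alphas[b] = c_b[0]

p_value = (boot_alphas >= alpha_hat).mean()                  # luck-adjusted right-tail p-value
print(alpha_hat, p_value)
```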

13.
Motivated by the practical challenge of monitoring the performance of a large number of algorithmic trading orders, this paper provides a methodology that leads to automatic discovery of the causes that lie behind poor trading performance. It also gives theoretical foundations to a generic framework for real-time trading analysis. Investigating the causes of bad and good trading performance is commonly referred to as transaction cost analysis (Rosenthal [Performance Metrics for Algorithmic Traders, 2009]). Automated algorithms handle most of the flow traded on electronic markets (more than 70% in the US, 45% in Europe and 35% in Japan in 2012). The academic literature provides different ways to formalize these algorithms and to show how optimal they can be from a mean-variance (as in Almgren and Chriss [J. Risk, 2000, 3(2), 5–39]), a stochastic control (e.g. Guéant et al. [Math. Financ. Econ., 2013, 7(4), 477–507]), an impulse control (see Bouchard et al. [SIAM J. Financ. Math., 2011, 2(1), 404–438]) or a statistical learning (as used in Laruelle et al. [Math. Financ. Econ., 2013, 7(3), 359–403]) viewpoint. This paper is agnostic about the way the algorithm has been built and provides a theoretical formalism to identify, in real time, the market conditions that influenced its efficiency or inefficiency. For a given set of characteristics describing the market context, selected by a practitioner, we first show how a set of additional derived explanatory factors, called anomaly detectors, can be created for each market order (following, for instance, Cristianini and Shawe-Taylor [An Introduction to Support Vector Machines and Other Kernel-based Learning Methods, 2000]). We then present an online methodology to quantify how well this extended set of factors, at any given time, predicts (i.e. has influence on, in the sense of the predictive power or information defined in Basseville and Nikiforov [Detection of Abrupt Changes: Theory and Application, 1993], Shannon [Bell Syst. Tech. J., 1948, 27, 379–423] and Alkoot and Kittler [Pattern Recogn. Lett., 1999, 20(11), 1361–1369]) which of the orders are underperforming, while calculating the predictive power of this explanatory factor set. Armed with this information, which we call influence analysis, we intend to empower the order-monitoring user to take appropriate action on any affected orders by re-calibrating the trading algorithms working the order with new parameters, pausing their execution or taking over more direct trading control. This method can also be used to automatically adjust the trading action of algorithms in post-trade analysis.

14.
High-order discretization schemes for SDEs using free Lie algebra-valued random variables were introduced by Kusuoka [Adv. Math. Econ., 2004, 5, 69–83], [Adv. Math. Econ., 2013, 17, 71–120], Lyons–Victoir [Proc. R. Soc. Lond. Ser. A Math. Phys. Sci., 2004, 460, 169–198], Ninomiya–Victoir [Appl. Math. Finance, 2008, 15, 107–121] and Ninomiya–Ninomiya [Finance Stochast., 2009, 13, 415–443]. These schemes are called KLNV methods. They involve solving the flows of the vector fields associated with the SDEs, which is usually done by numerical methods. The authors have found a special Lie algebraic structure on the vector fields in the major financial diffusion models. Using this structure, we can solve the flows associated with these vector fields analytically and efficiently. Numerical examples show that our method reduces the computation time drastically.

15.
This paper provides a formula for a commonly used measure of the economic value of asset return predictability. In doing this, we find that there is a strong connection between this measure and a traditional statistical measure of predictive quality. In particular, we demonstrate that the maximum amount an investor is willing to pay for predictability knowledge (the performance fee) is a simple transformation of the R² statistic associated with the predictor equation. We illustrate the use of these results with an application to the Ibbotson US bond and equity data (and a set of pertinent predictors), and via application to the results published in Fama and French [1988. Dividend yields and expected stock returns. Journal of Financial Economics 22: 3–25], Balvers, Cosimano, and McDonald [1990. Predicting stock returns in an efficient market. Journal of Finance 45: 1109–28], Lettau and Ludvigson [2001. Consumption, aggregate wealth and expected stock returns. Journal of Finance 56: 815–49], and Santa-Clara and Yan [2010. Crashes, volatility, and the equity premium: Lessons from S&P 500 options. Review of Economics and Statistics 92: 435–51].

16.
The rough Bergomi model, introduced by Bayer et al. [Quant. Finance, 2016, 16(6), 887–904], is one of the recent rough volatility models that are consistent with the stylised fact of implied volatility surfaces being essentially time-invariant, and that are able to capture the term structure of skew observed in equity markets. In the absence of analytical European option pricing methods for the model, we focus on reducing the runtime-adjusted variance of Monte Carlo implied volatilities, thereby contributing to the model’s calibration by simulation. We employ a novel composition of variance reduction methods, immediately applicable to any conditionally log-normal stochastic volatility model. Assuming one targets implied volatility estimates, and hence calibration RMSE, with a given degree of confidence, the results we demonstrate equate to significant runtime reductions of roughly 20 times on average across different correlation regimes.

17.
Microscopic simulation models are often evaluated based on visual inspection of the results. This paper presents formal econometric techniques to compare microscopic simulation (MS) models with real-life data. A related result is a methodology to compare different MS models with each other. For this purpose, possible parameters of interest, such as mean returns or autocorrelation patterns, are classified and characterized. For each class of characteristics, the appropriate techniques are presented. We illustrate the methodology by comparing the MS model developed by He and Li [J. Econ. Dynam. Control, 2007, 31, 3396–3426; Quant. Finance, 2008, 8, 59–79] with actual data.

18.
The rough Bergomi (rBergomi) model, introduced recently in Bayer et al. [Pricing under rough volatility. Quant. Finance, 2016, 16(6), 887–904], is a promising rough volatility model in quantitative finance. It is a parsimonious model depending on only three parameters, and yet it remarkably fits empirical implied volatility surfaces. In the absence of analytical European option pricing methods for the model, and due to the non-Markovian nature of the fractional driver, the prevalent option is to use Monte Carlo (MC) simulation for pricing. Despite recent advances in the MC method in this context, pricing under the rBergomi model is still a time-consuming task. To overcome this issue, we have designed a novel, hierarchical approach based on: (i) adaptive sparse grid quadrature (ASGQ), and (ii) quasi-Monte Carlo (QMC). Both techniques are coupled with a Brownian bridge construction and a Richardson extrapolation on the weak error. By uncovering the available regularity, our hierarchical methods demonstrate substantial computational gains with respect to the standard MC method. They reach a sufficiently small relative error tolerance in the price estimates across different parameter constellations, even for very small values of the Hurst parameter. Our work opens a new research direction in this field, i.e. to investigate the performance of methods other than Monte Carlo for pricing and calibrating under the rBergomi model.
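A minimal sketch of the quasi-Monte Carlo ingredient alone, pricing a plain Black–Scholes call with scrambled Sobol points versus standard Monte Carlo; the full hierarchical method (ASGQ/QMC with Brownian bridge and Richardson extrapolation under rBergomi dynamics) is far richer, and all parameters here are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm, qmc

# Minimal sketch of the QMC ingredient only: scrambled Sobol points mapped through the
# inverse normal CDF, used to price a plain European call under Black–Scholes and
# compared with plain Monte Carlo. Parameters are illustrative assumptions.
S0, K, T, r, sigma = 100.0, 100.0, 1.0, 0.0, 0.2

def payoff_from_normals(z):
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0)

n = 2**16
sobol = qmc.Sobol(d=1, scramble=True, seed=7)
z_qmc = norm.ppf(sobol.random_base2(m=16)).ravel()
z_mc = np.random.default_rng(7).standard_normal(n)

# Exact Black–Scholes price for the at-the-money call with r = 0.
bs_price = S0 * norm.cdf(0.5 * sigma * np.sqrt(T)) - K * norm.cdf(-0.5 * sigma * np.sqrt(T))
print("exact BS:", bs_price)
print("QMC     :", payoff_from_normals(z_qmc).mean())
print("MC      :", payoff_from_normals(z_mc).mean())
```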

19.
Yue Qiu and Tian Xie, Quantitative Finance, 2013, 13(10), 1673–1687
Empirical evidence has demonstrated that certain factors in asset pricing models are more important than others for explaining specific portfolio returns. We propose a technique that evaluates the factors included in popular linear asset pricing models. Our method has the advantage of simultaneously ranking the relative importance of those pricing factors by comparing their model weights. As an empirical verification, we apply our method to portfolios formed following Fama and French [A five-factor asset pricing model. J. Financ. Econ., 2015, 116, 1–22] and demonstrate that models adjusted according to our factor rankings do improve their explanatory power in both in-sample and out-of-sample analyses.

20.
This paper tests the effects of exchange rate and inflation risk factors on asset pricing in the European Union (EU) stock markets. This investigation was motivated by the results of Vassalou [J. Int. Money Finance, 2000, 19, 433–470] showing that both exchange rate and foreign inflation risks are generally priced in equity returns, and it takes the opportunity to evaluate the causality between these sources of risk after the elimination of intra-EU currency risks brought about by the adoption of the single currency. Our results show that both exchange rate and inflation risks are significantly priced in the pre- and post-euro periods. Moreover, the sizes of the exchange rate and inflation risk premiums are economically significant in both periods. Furthermore, the UK and excluding-UK inflation risk premiums explain, in part, our evidence of a large EUR/GBP exchange rate risk premium and of an economically significant domestic non-diversifiable risk after euro adoption. Hence, overlooking inflation risk factors can produce an under- or overestimation of the currency premiums and a miscalculation of the degree of integration of stock markets.
