Similar documents
20 similar documents found (search time: 165 ms)
1.
Numerical integration methods for stochastic volatility models in financial markets are discussed. We concentrate on two classes of stochastic volatility models where the volatility is either directly given by a mean-reverting CEV process or as a transformed Ornstein–Uhlenbeck process. For the latter, we introduce a new model based on a simple hyperbolic transformation. Various numerical methods for integrating mean-reverting CEV processes are analysed and compared with respect to positivity preservation and efficiency. Moreover, we develop a simple and robust integration scheme for the two-dimensional system using the strong convergence behaviour as an indicator for the approximation quality. This method, which we refer to as the IJK (137) scheme, is applicable to all types of stochastic volatility models and can be employed as a drop-in replacement for the standard log-Euler procedure.
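A minimal sketch of one positivity-aware discretization in this family is the full-truncation Euler scheme, in which negative excursions of the variance are floored at zero before entering the drift and diffusion. This is an illustrative stand-in, not the IJK scheme of the abstract:

```python
import numpy as np

def full_truncation_euler(v0, kappa, theta, xi, gamma, T, n_steps, n_paths, rng):
    """Simulate the mean-reverting CEV variance process
        dv = kappa * (theta - v) dt + xi * v**gamma dW
    with the full-truncation Euler scheme: drift and diffusion are evaluated
    at max(v, 0), so negative excursions cannot feed back into the dynamics.
    (Generic sketch; the paper's IJK scheme is not reproduced here.)"""
    dt = T / n_steps
    v = np.full(n_paths, v0, dtype=float)
    for _ in range(n_steps):
        vp = np.maximum(v, 0.0)  # truncate before using v in drift/diffusion
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)
        v = v + kappa * (theta - vp) * dt + xi * vp**gamma * dw
    return np.maximum(v, 0.0)  # return the usable (non-negative) variance

rng = np.random.default_rng(0)
v_T = full_truncation_euler(0.04, 1.5, 0.04, 0.5, 0.75, 1.0, 250, 10_000, rng)
```

Because the scheme only truncates inside the coefficients, the raw iterate can dip below zero, but the variance actually fed into an asset-price discretization never does.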

2.
This paper proposes a new analytical approximation scheme for the representation of the forward–backward stochastic differential equations (FBSDEs) of Ma and Zhang (Ann Appl Probab, 2002). In particular, we obtain an error estimate for the scheme by applying a Malliavin calculus method to the forward SDEs combined with the Picard iteration scheme for the BSDEs. We also show numerical examples for pricing options with counterparty risk under local and stochastic volatility models, where the credit value adjustment is taken into account.

3.
For financial risk management it is of vital interest to have good estimates for the correlations between the stocks. It has been found that the correlations obtained from historical data are contaminated by a considerable amount of noise, which leads to a substantial error in the estimation of the portfolio risk. A method to suppress this noise is power mapping. It raises the absolute value of each matrix element to a power q while preserving the sign. In this paper we use the Markowitz portfolio optimization as a criterion for the optimal value of q and find a K/T dependence, where K is the portfolio size and T the length of the time series. Both in numerical simulations and for real market data we find that power mapping leads to portfolios with considerably reduced risk. It compares well with another noise reduction method based on spectral filtering. A combination of both methods yields the best results.
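The mapping itself is a one-liner; the substance of the abstract lies in choosing q. A direct transcription, with the optimal q left as an input:

```python
import numpy as np

def power_map(corr, q):
    """Power mapping for noise suppression: raise the absolute value of each
    correlation-matrix element to the power q while preserving its sign.
    (The optimal q depends on K/T, as the abstract discusses; here it is
    simply a parameter.)"""
    corr = np.asarray(corr, dtype=float)
    return np.sign(corr) * np.abs(corr) ** q

C = np.array([[1.0, -0.3, 0.1],
              [-0.3, 1.0, 0.05],
              [0.1, 0.05, 1.0]])
Cq = power_map(C, 1.5)
```

For q > 1 the mapping shrinks small (noise-dominated) entries toward zero while leaving the unit diagonal and the signs untouched.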

4.
We introduce a simulation scheme for Brownian semistationary processes, which is based on discretizing the stochastic integral representation of the process in the time domain. We assume that the kernel function of the process is regularly varying at zero. The novel feature of the scheme is to approximate the kernel function by a power function near zero and by a step function elsewhere. The resulting approximation of the process is a combination of Wiener integrals of the power function and a Riemann sum, which is why we call this method a hybrid scheme. Our main theoretical result describes the asymptotics of the mean square error of the hybrid scheme, and we observe that the scheme leads to a substantial improvement of accuracy compared to the ordinary forward Riemann-sum scheme, while having the same computational complexity. We exemplify the use of the hybrid scheme by two numerical experiments, where we examine the finite-sample properties of an estimator of the roughness parameter of a Brownian semistationary process and study Monte Carlo option pricing in the rough Bergomi model of Bayer et al. (Quant. Finance 16:887–904, 2016), respectively.

5.
Abstract

Dufresne et al. (1991) introduced a general risk model defined as the limit of compound Poisson processes. Such a model is either a compound Poisson process itself or a process with an infinite number of small jumps. Later, in a series of now classical papers, the joint distribution of the time of ruin, the surplus before ruin, and the deficit at ruin was studied (Gerber and Shiu 1997, 1998a, 1998b; Gerber and Landry 1998). These works use the classical and the perturbed risk models and hint that the results can be extended to gamma and inverse Gaussian risk processes.

In this paper we work out this extension to a generalized risk model driven by a nondecreasing Lévy process. Unlike the classical case that models the individual claim size distribution and obtains from it the aggregate claims distribution, here the aggregate claims distribution is known in closed form. It is simply the one-dimensional distribution of a subordinator. Embedded in this wide family of risk models we find the gamma, inverse Gaussian, and generalized inverse Gaussian processes. Expressions for the Gerber-Shiu function are given in some of these special cases, and numerical illustrations are provided.

6.
Many empirical studies suggest that the distribution of risk factors has heavy tails. One commonly assumes that the underlying risk factors follow a multivariate normal distribution, an assumption in conflict with this empirical evidence. We consider a multivariate t distribution for capturing the heavy tails, and a quadratic function of the changes in the risk factors is used for a non-linear asset. Although Monte Carlo analysis is by far the most powerful method to evaluate a portfolio's Value-at-Risk (VaR), a major drawback of this method is that it is computationally demanding. In this paper, we first transform the asset returns by using a quadratic approximation for the portfolio. Second, we model the returns' risk factors by using a multivariate normal as well as a multivariate t distribution. We then provide a bootstrap algorithm with importance resampling and develop the Laplace method to improve the efficiency of the simulation, in order to estimate the portfolio loss probability and evaluate the portfolio VaR. Importance sampling is a powerful tool for reducing the number of random numbers that must be generated in the bootstrap setting. In the simulation study and sensitivity analysis of the bootstrap method, we observe that the estimates of the quantile and tail probability with importance resampling are more efficient than those of the naive Monte Carlo method. We also note that the estimates of the quantile and the tail probability are not sensitive to the estimated parameters of the multivariate normal and the multivariate t distribution. The research of Shih-Kuei Lin was partially supported by the National Science Council under grant NSC 93-2146-H-259-023. The research of Cheng-Der Fuh was partially supported by the National Science Council under grant NSC 94-2118-M-001-028.
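The variance-reduction idea can be shown in its simplest form: estimating a Gaussian tail probability by exponentially tilting the sampling distribution and reweighting. This is a generic sketch of importance sampling for tail events, not the paper's bootstrap algorithm with multivariate normal or t risk factors:

```python
import numpy as np

def tail_prob_is(c, n, rng):
    """Estimate P(Z > c) for Z ~ N(0, 1) by importance sampling: draw from
    the tilted density N(c, 1), where the tail event is no longer rare, and
    reweight by the likelihood ratio exp(-c*z + c**2/2).
    (Textbook tilting sketch illustrating the variance-reduction idea.)"""
    z = rng.standard_normal(n) + c          # samples from N(c, 1)
    w = np.exp(-c * z + 0.5 * c * c)        # likelihood ratio dN(0,1)/dN(c,1)
    return float(np.mean(w * (z > c)))

rng = np.random.default_rng(2)
p = tail_prob_is(3.0, 100_000, rng)  # true value is about 1.35e-3
```

A naive estimator would need millions of draws to resolve a probability of this size; the tilted estimator resolves it with a few thousand because nearly every sample lands in the region of interest.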

7.
Today, better numerical approximations are required for multi-dimensional SDEs to improve on the poor performance of the standard Monte Carlo pricing method. With this aim in mind, this paper presents a method (MSL-MC) to price exotic options using multi-dimensional SDEs (e.g. stochastic volatility models). Usually, it is the weak convergence property of numerical discretizations that is most important, because, in financial applications, one is mostly concerned with the accurate estimation of expected payoffs. However, in the recently developed Multilevel Monte Carlo path simulation method (ML-MC), the strong convergence property plays a crucial role. We present a modification to the ML-MC algorithm that can be used to achieve better savings. To illustrate these, various examples of exotic options are given using a wide variety of payoffs, stochastic volatility models and the new Multischeme Multilevel Monte Carlo method (MSL-MC). For standard payoffs, both European and Digital options are presented. Examples are also given for complex payoffs, such as combinations of European options (Butterfly Spread, Strip and Strap options). Finally, for path-dependent payoffs, both Asian and Variance Swap options are demonstrated. This research shows how the use of stochastic volatility models and the θ scheme can improve the convergence of the MSL-MC so that the computational cost to achieve an accuracy of O(ε) is reduced from O(ε⁻³) to O(ε⁻²) for a payoff under global and non-global Lipschitz conditions.
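The telescoping-sum structure that underlies all multilevel variants can be sketched for a European call under geometric Brownian motion: E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}], where coarse and fine paths at each level share the same Brownian increments so the level corrections have small variance. This is a minimal illustration of plain ML-MC, not the MSL-MC of the abstract:

```python
import numpy as np

def mlmc_gbm_call(S0, r, sigma, T, K, L, N, rng):
    """Minimal multilevel Monte Carlo sketch: Euler paths with 2**l steps at
    level l, fixed N samples per level.  The coarse path at each level reuses
    the fine path's Brownian increments (pairwise summed), which is the
    coupling that makes E[P_l - P_{l-1}] cheap to estimate."""
    payoff = lambda s: np.exp(-r * T) * np.maximum(s - K, 0.0)

    def level_estimate(l):
        nf = 2 ** l
        dt = T / nf
        dW = rng.standard_normal((N, nf)) * np.sqrt(dt)
        Sf = np.full(N, S0)
        for n in range(nf):                       # fine Euler path
            Sf = Sf * (1 + r * dt + sigma * dW[:, n])
        if l == 0:
            return payoff(Sf).mean()
        dWc = dW[:, 0::2] + dW[:, 1::2]           # coupled coarse increments
        Sc = np.full(N, S0)
        for n in range(nf // 2):                  # coarse Euler path
            Sc = Sc * (1 + r * 2 * dt + sigma * dWc[:, n])
        return (payoff(Sf) - payoff(Sc)).mean()

    return float(sum(level_estimate(l) for l in range(L + 1)))

rng = np.random.default_rng(1)
price = mlmc_gbm_call(100.0, 0.05, 0.2, 1.0, 100.0, 4, 20_000, rng)
```

In a production implementation the per-level sample sizes would be chosen from the estimated level variances rather than held fixed, which is where the cost reduction quoted in the abstract comes from.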

8.
Abstract

The aim of this paper is to analyse two functions that are of general interest in the collective risk theory, namely F, the distribution function of the total amount of claims, and II, the Stop Loss premium. Section 2 presents certain basic formulae. Sections 17-18 present five claim distributions. Corresponding to these claim distributions, the functions F and II were calculated under various assumptions as to the distribution of the number of claims. These calculations were performed on an electronic computer and the numerical method used for this purpose is presented in sections 9, 19 and 20 under the name of the C-method, which has the advantage of furnishing upper and lower limits of the quantities under estimation. The means of these limits, in the following regarded as the “exact” results, are given in Tables 4-20. Sections 11-16 present certain approximation methods. The N-method of section 11 is an Edgeworth expansion, while the G-method given in section 12 is an approximation by a Pearson type III curve. The methods presented in sections 13-16, and denoted A1-A4, are all applications and modifications of the Esscher method. These approximation methods have been applied for the calculation of F and II in the cases mentioned above in which “exact” results were obtained. The results are given in Tables 4-20. The object of this investigation was to obtain information as to the precision of the approximation methods in question, and to compare their relative merits. These results are discussed in sections 21-24.

9.
In the framework of collective risk theory, we consider a compound Poisson risk model for the surplus process where the process (and hence ruin) can only be observed at random observation times. For Erlang(n) distributed inter-observation times, explicit expressions for the discounted penalty function at ruin are derived. The resulting model contains both the usual continuous-time and the discrete-time risk model as limiting cases, and can be used as an effective approximation scheme for the latter. Numerical examples are given that illustrate the effect of random observation times on various ruin-related quantities.

10.
Abstract

This paper extends a target-based model of income drawdown developed in Gerrard et al. (Insurance: Mathematics and Economics 35: 321–342 [2006]) (GHV) for the distribution phase of a defined contribution pension scheme. The optimal investment strategy of the pension fund and the optimal drawdown are found using linear-quadratic optimization, which minimizes the deviation of the fund and the drawdown from prescribed targets. The GHV model is modified by nondimensionalizing the loss function, so that there is a relative choice between outcomes.

Using this model, three classes of target are studied. Endogenous deterministic targets are suggested from the form of the optimal controls, while exogenous deterministic targets can be stated without knowledge of the optimization problem. The third class of stochastic targets is similar to recent annuity products, which incorporate investment risk. Each scheme represents a trade-off between investment risk and return, and this is illustrated by numerical simulation with reference to a canonical example. A particularly attractive form of income drawdown is given by an implied rate of return target. This yields a reasonable investment strategy and a robust consumption profile with age. In addition, it can be easily explained to pension scheme members.

11.
We develop a maximum penalized quasi-likelihood estimator for estimating in a non-parametric way the diffusion function of a diffusion process, as an alternative to more traditional kernel-based estimators. After developing a numerical scheme for computing the maximizer of the penalized maximum quasi-likelihood function, we study the asymptotic properties of our estimator by way of simulation. Under the assumption that overnight London Interbank Offered Rates (LIBOR), the USD/EUR, USD/GBP, JPY/USD, and EUR/USD nominal exchange rates, and the 1-month, 3-month Treasury bill yields, and 30-year Treasury bond yields are generated by diffusion processes, we use our numerical scheme to estimate the diffusion function.

12.
This paper will demonstrate how European and American option prices can be computed under the jump-diffusion model using the radial basis function (RBF) interpolation scheme. The RBF interpolation scheme is demonstrated by solving an option pricing formula, a one-dimensional partial integro-differential equation (PIDE). We select the cubic spline radial basis function and adopt a simple numerical algorithm (Briani et al. in Calcolo 44:33–57, 2007) to establish a finite computational range for the improper integral of the PIDE. This algorithm reduces the truncation error of approximating the improper integral. As a result, we are able to achieve a higher approximation accuracy of the integral with the application of any quadrature. Moreover, we apply a numerical technique termed cubic spline factorisation (Bos and Salkauskas in J Approx Theory 51:81–88, 1987) to solve the inversion of an ill-conditioned RBF interpolant, which is a well-known research problem in the RBF field. Finally, our numerical experiments show that in the European case, our RBF-interpolation solution is second-order accurate for spatial variables, while in the American case, it is second-order accurate for spatial variables and first-order accurate for time variables.
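The interpolation building block can be sketched in one dimension: a cubic RBF interpolant with the linear polynomial tail that the conditionally positive definite basis φ(r) = r³ requires for a well-posed system. This is a generic cubic-RBF sketch, not the paper's PIDE solver or its cubic spline factorisation:

```python
import numpy as np

def cubic_rbf_fit(xs, fs):
    """Fit s(x) = sum_j w_j |x - x_j|**3 + c0 + c1*x by solving the standard
    saddle system [[A, P], [P.T, 0]] [w; c] = [f; 0] with A_ij = |x_i - x_j|**3
    and P = [1, x].  The polynomial tail makes the system nonsingular for
    distinct nodes."""
    n = len(xs)
    A = np.abs(xs[:, None] - xs[None, :]) ** 3
    P = np.column_stack([np.ones(n), xs])
    M = np.block([[A, P], [P.T, np.zeros((2, 2))]])
    sol = np.linalg.solve(M, np.concatenate([fs, np.zeros(2)]))
    return sol[:n], sol[n:]

def cubic_rbf_eval(xs, w, c, x):
    x = np.atleast_1d(x).astype(float)
    return (np.abs(x[:, None] - xs[None, :]) ** 3) @ w + c[0] + c[1] * x

xs = np.linspace(0.0, 1.0, 11)
fs = np.sin(2 * np.pi * xs)
w, c = cubic_rbf_fit(xs, fs)
err = float(np.max(np.abs(cubic_rbf_eval(xs, w, c, xs) - fs)))
```

The ill-conditioning mentioned in the abstract shows up as the node count grows; the cubic spline factorisation it cites is one way to tame the resulting linear solves.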

13.
A rich variety of probability distributions has been proposed in the actuarial literature for fitting of insurance loss data. Examples include: lognormal, log-t, various versions of Pareto, loglogistic, Weibull, gamma and its variants, and generalized beta of the second kind distributions, among others. In this paper, we supplement the literature by adding the log-folded-normal and log-folded-t families. Shapes of the density function and key distributional properties of the ‘folded’ distributions are presented along with three methods for the estimation of parameters: method of maximum likelihood; method of moments; and method of trimmed moments. Further, large and small-sample properties of these estimators are studied in detail. Finally, we fit the newly proposed distributions to data which represent the total damage done by 827 fires in Norway for the year 1988. The fitted models are then employed in a few quantitative risk management examples, where point and interval estimates for several value-at-risk measures are calculated.
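One reading of the 'log-folded' construction: if X ~ N(μ, σ²), then |X| is folded normal and Y = exp(|X|) is log-folded-normal, supported on [1, ∞). The density below follows from folding plus the 1/y Jacobian of the log change of variable; the paper's exact parametrization may differ, so treat this as an assumed sketch:

```python
import numpy as np

def log_folded_normal_pdf(y, mu, sigma):
    """Density of Y = exp(|X|) with X ~ N(mu, sigma**2), supported on y >= 1.
    Folding gives f_{|X|}(x) = [phi((x-mu)/sigma) + phi((x+mu)/sigma)] / sigma
    for x >= 0; substituting x = log(y) contributes the extra 1/y factor.
    (Assumed parametrization, not verified against the paper.)"""
    y = np.asarray(y, dtype=float)
    x = np.log(np.maximum(y, 1.0))  # guard: log only evaluated on the support
    phi = lambda z: np.exp(-0.5 * z * z) / np.sqrt(2 * np.pi)
    pdf = (phi((x - mu) / sigma) + phi((x + mu) / sigma)) / (sigma * y)
    return np.where(y >= 1.0, pdf, 0.0)

# sanity check: the density should integrate to ~1 over [1, infinity)
ys = np.linspace(1.0, 200.0, 400_000)
f = log_folded_normal_pdf(ys, mu=0.5, sigma=0.8)
mass = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(ys)))
```

The lower support bound of 1 (rather than 0, as for the lognormal) is what makes the family a natural candidate for loss data with a positive minimum.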

14.
Revisiting the framework of Barillas and Shanken (2018, Comparing asset pricing models, The Journal of Finance 73, 715–754), henceforth BS, we show that the Bayesian marginal likelihood-based model comparison method in that paper is unsound: the priors on the nuisance parameters across models must satisfy a change of variable property for densities that is violated by the Jeffreys priors used in the BS method. Extensive simulation exercises confirm that the BS method performs unsatisfactorily. We derive a new class of improper priors on the nuisance parameters, starting from a single improper prior, which leads to valid marginal likelihoods and model comparisons. The performance of our marginal likelihoods is significantly better, allowing for reliable Bayesian work on which factors are risk factors in asset pricing models.

15.
Safety is one of the most important issues in modern industrial plants and industrial activities. The role of safety engineering is to ensure acceptable safety levels of production systems, not only to respect local laws and regulations, but also to improve production efficiency and to reduce manufacturing costs. For these reasons, the choice of a proper model for risk assessment is crucial. In this context, the present research aims to propose a new method, called Total Efficient Risk Priority Number (TERPN), able to classify risks and identify corrective actions in order to obtain the highest risk reduction at the lowest cost. The main aim is to suggest a simple but suitable model for ranking risks in a company, to reach the maximum effectiveness of prevention and protection strategies. The TERPN method is an integration of the popular Failure Mode Effect and Criticality Analysis (FMECA) with other important factors in risk assessment.

16.
Credibility theory is a statistical tool to calculate the premium for the next period based on past claims experience and the manual rate. Each contract is characterized by a risk parameter. A phase-type (or PH) random variable, which is defined as the time until absorption in a continuous-time Markov chain, is fully characterized by two sets of parameters from that Markov chain: the initial probability vector and transition intensity matrix. In this article, we identify an interpretable univariate risk parameter from amongst the many candidate parameters, by means of uniformization. The resulting density form is then expressed as an infinite mixture of Erlang distributions. These results are used to obtain a tractable likelihood function by a recursive formula. Then the best estimator for the next premium, i.e. the Bayesian premium, as well as its approximation by the Bühlmann credibility premium are calculated. Finally, actuarial calculations for the Bühlmann and Bayesian premiums are investigated in the context of a gamma prior, and illustrated by simulated data in a series of examples.
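The Bühlmann approximation mentioned at the end has a simple closed form: a credibility-weighted average of the contract's own experience and the manual rate. The textbook formula is sketched below; in the paper the structural parameter k would come out of the phase-type/gamma model rather than being supplied by hand:

```python
import numpy as np

def buhlmann_premium(claims, mu, k):
    """Bühlmann credibility premium: Z * xbar + (1 - Z) * mu with
    Z = n / (n + k), where n is the number of past observations, xbar the
    contract's own mean claim, mu the manual (collective) rate, and
    k = E[process variance] / Var[hypothetical means].
    (Textbook formula; k is an assumed input here.)"""
    n = len(claims)
    z = n / (n + k)
    return float(z * np.mean(claims) + (1 - z) * mu)

past = np.array([120.0, 95.0, 130.0, 110.0])  # one contract's claim history
prem = buhlmann_premium(past, mu=100.0, k=4.0)
```

With four observations and k = 4 the credibility factor is Z = 0.5, so the premium sits exactly halfway between the contract's mean claim of 113.75 and the manual rate of 100.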

17.
We apply the multilevel Monte Carlo method for option pricing problems using exponential Lévy models with a uniform timestep discretisation. For lookback and barrier options, we derive estimates of the convergence rate of the error introduced by the discrete monitoring of the running supremum of a broad class of Lévy processes. We then use these to obtain upper bounds on the multilevel Monte Carlo variance convergence rate for the variance gamma, NIG and α-stable processes. We also provide an analysis of a trapezoidal approximation for Asian options. Our method is illustrated by numerical experiments.

18.
Guarantees embedded in variable annuity contracts exhibit option-like payoff features and the pricing of such instruments naturally leads to risk neutral valuation techniques. This paper considers the pricing of two types of guarantees; namely, the Guaranteed Minimum Maturity Benefit and the Guaranteed Minimum Death Benefit riders written on several underlying assets whose dynamics are given by affine stochastic processes. Within the standard affine framework for the underlying mortality risk, stochastic volatility and correlation risk, we develop the key ingredients to perform the pricing of such guarantees. The model implies that the corresponding characteristic function for the state variables admits a closed form expression. We illustrate the methodology for two possible payoffs for the guarantees leading to prices that can be obtained through numerical integration. Using typical values for the parameters, an implementation of the model is provided and underlines the significant impact of the assets’ correlation structure on the guarantee prices.

19.
We consider the problem of maximizing the expected utility of the terminal wealth of a portfolio in a continuous-time pure jump market with general utility function. This leads to an optimal control problem for piecewise deterministic Markov processes. Using an embedding procedure we solve the problem by looking at a discrete-time contracting Markov decision process. Our aim is to show that this point of view has a number of advantages, in particular as far as computational aspects are concerned. We characterize the value function as the unique fixed point of the dynamic programming operator and prove the existence of optimal portfolios. Moreover, we show that value iteration as well as Howard’s policy improvement algorithm works. Finally, we give error bounds when the utility function is approximated and when we discretize the state space. A numerical example is presented and our approach is compared to the approximating Markov chain method.

20.
In this paper we consider the problem of hedging an arithmetic Asian option with discrete monitoring in an exponential Lévy model by deriving backward recursive integrals for the price sensitivities of the option. The procedure is applied to the analysis of the performance of the delta and delta–gamma hedges in an incomplete market; particular attention is paid to the hedging error and the impact of model error on the quality of the chosen hedging strategy. The numerical analysis shows the impact of jump risk on the hedging error of the option position, and the importance of including traded options in the hedging portfolio for the reduction of this risk.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号