Similar Literature (20 results)
1.
This paper studies the parameter estimation problem for Ornstein–Uhlenbeck stochastic volatility models driven by Lévy processes. Estimation is regarded as the principal challenge in applying these models since they were proposed by Barndorff-Nielsen and Shephard [J. R. Stat. Soc. Ser. B, 2001, 63(2), 167–241]. Most previous work has used a Bayesian paradigm, whereas we treat the problem in the framework of maximum likelihood estimation, applying gradient-based simulation optimization. A hidden Markov model is introduced to formulate the likelihood of observations; sequential Monte Carlo is applied to sample the hidden states from the posterior distribution; smoothed perturbation analysis is used to deal with the discontinuities introduced by jumps in estimating the gradient. Numerical experiments indicate that the proposed gradient-based simulated maximum likelihood estimation approach provides an efficient alternative to current estimation methods.
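The likelihood in this hidden-Markov formulation has no closed form, so it is approximated by simulation. Below is a minimal sketch of that idea for a toy discrete-time log-volatility model (not the paper's Lévy-driven OU specification): a bootstrap particle filter returns a Monte Carlo estimate of the log-likelihood, which can then be maximized over the parameters; the gradient-estimation step is omitted.

```python
import numpy as np

def particle_filter_loglik(y, phi, sigma_eta, mu, n_particles=1000, seed=0):
    """Bootstrap particle filter log-likelihood for a toy log-volatility model:
    h_t = mu + phi*(h_{t-1} - mu) + sigma_eta*eps_t,  y_t ~ N(0, exp(h_t)).
    A stand-in for the hidden-Markov formulation; the paper's Levy-driven OU
    volatility and gradient estimator are not reproduced here."""
    rng = np.random.default_rng(seed)
    # initialise particles from the stationary distribution of h (requires |phi| < 1)
    h = rng.normal(mu, sigma_eta / np.sqrt(1 - phi**2), n_particles)
    loglik = 0.0
    for yt in np.asarray(y):
        # propagate particles through the state equation
        h = mu + phi * (h - mu) + sigma_eta * rng.standard_normal(n_particles)
        # weight by the observation density N(0, exp(h))
        logw = -0.5 * (np.log(2 * np.pi) + h + yt**2 * np.exp(-h))
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())
        # multinomial resampling
        idx = rng.choice(n_particles, n_particles, p=w / w.sum())
        h = h[idx]
    return loglik
```

Maximizing this estimated log-likelihood over (mu, phi, sigma_eta), by grid search or a stochastic-gradient scheme, gives a simulated maximum likelihood estimate.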

2.
Quantitative Finance, 2013, 13(3), 163–172
Abstract

Support vector machines (SVMs) are a new nonparametric tool for regression estimation. We will use this tool to estimate the parameters of a GARCH model for predicting the conditional volatility of stock market returns. GARCH models are usually estimated using maximum likelihood (ML) procedures, assuming that the data are normally distributed. In this paper, we will show that GARCH models can be estimated using SVMs and that such estimates have higher predictive ability than those obtained via common ML methods.
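As a rough illustration of the SVM-for-volatility idea (not the paper's exact estimator), one can regress squared returns on their own lags with a support vector regression and use the fit as a one-step-ahead variance proxy. The lag order and kernel settings below are placeholders.

```python
import numpy as np
from sklearn.svm import SVR

def svr_volatility_forecast(returns, p=5, C=10.0, epsilon=1e-4):
    """Nonparametric one-step-ahead variance proxy: regress r_t^2 on the previous
    p squared returns with an RBF-kernel SVR (a stand-in for the GARCH recursion)."""
    r2 = np.asarray(returns)**2
    # rows: (r2_{t-p}, ..., r2_{t-1}); target: r2_t
    X = np.column_stack([r2[i:len(r2) - p + i] for i in range(p)])
    y = r2[p:]
    model = SVR(kernel="rbf", C=C, epsilon=epsilon).fit(X, y)
    next_x = r2[-p:].reshape(1, -1)
    return float(model.predict(next_x)[0])   # forecast of the next squared return
```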

3.
Data insufficiency and reporting thresholds are two main issues in operational risk modelling. When these conditions are present, maximum likelihood estimation (MLE) may produce very poor parameter estimates. In this study, we first investigate four methods to estimate the parameters of truncated distributions for small samples: MLE, the expectation-maximization algorithm, penalized likelihood estimators, and Bayesian methods. In the absence of proper prior information, Jeffreys’ prior for truncated distributions is used. Based on a simulation study for the log-normal distribution, we find that the Bayesian method gives much more credible and reliable estimates than the MLE method. Finally, an application to operational loss severity estimation with real data is conducted using the truncated log-normal and log-gamma distributions. With the Bayesian method, the loss distribution parameters and value-at-risk measure for every cell with loss data can be estimated separately for internal and external data. Moreover, confidence intervals for the Bayesian estimates are obtained via a bootstrap method.
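For concreteness, here is a minimal sketch of the truncated-distribution MLE for the log-normal case under a known reporting threshold; the Bayesian and EM variants discussed above are not reproduced.

```python
import numpy as np
from scipy import stats, optimize

def truncated_lognormal_mle(losses, threshold):
    """MLE for a log-normal severity observed only above a reporting threshold H:
    density f(x; mu, sigma) / (1 - F(H; mu, sigma)) for x >= H."""
    x = np.asarray(losses)

    def negloglik(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)                              # keep sigma positive
        logf = stats.norm.logpdf(np.log(x), mu, sigma) - np.log(x)
        log_tail = stats.norm.logsf(np.log(threshold), mu, sigma)
        return -(logf - log_tail).sum()

    # starting values from the (truncated) sample moments
    start = np.array([np.log(x).mean(), np.log(np.log(x).std())])
    res = optimize.minimize(negloglik, start, method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])                          # (mu_hat, sigma_hat)
```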

4.
This paper proposes a two-step methodology for Value-at-Risk prediction. The first step involves estimation of a GARCH model using quasi-maximum likelihood estimation, and the second step applies the skewed t distribution of Azzalini and Capitanio [J. R. Stat. Soc. B, 2003, 65, 367–389] to the model-filtered returns. The predictive performance of this method is compared to the single-step joint estimation of the same data-generating process, to the well-known GARCH-EVT model and to a comprehensive set of other market risk models. Backtesting results show that the proposed two-step method outperforms most benchmarks, including the classical joint estimation of the same data-generating process, and performs competitively with respect to the GARCH-EVT model. This paper recommends two robust models to risk managers of emerging market stock portfolios. Both models are estimated in two steps: the GJR-GARCH-EVT model and the two-step GARCH-St model proposed in this study.
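A minimal sketch of the two-step procedure follows. The GARCH(1,1) quasi-likelihood is written out directly, and a symmetric Student-t fitted to the standardized residuals stands in for the Azzalini–Capitanio skewed t, so the residual-distribution step is a deliberate simplification.

```python
import numpy as np
from scipy import stats, optimize

def garch11_filter(r, omega, alpha, beta):
    """Conditional variance recursion sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1}."""
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1]**2 + beta * sigma2[t - 1]
    return sigma2

def two_step_var(r, level=0.01):
    """Step 1: Gaussian QMLE of GARCH(1,1). Step 2: fit a Student-t to the
    standardized residuals (a stand-in for the skewed t) and form the VaR forecast."""
    r = np.asarray(r, float)

    def qml_negloglik(params):
        omega, alpha, beta = np.exp(params)        # positivity via log-parametrization
        if alpha + beta >= 0.999:                  # crude stationarity guard
            return 1e10
        s2 = garch11_filter(r, omega, alpha, beta)
        return 0.5 * np.sum(np.log(s2) + r**2 / s2)

    res = optimize.minimize(qml_negloglik, np.log([r.var() * 0.05, 0.05, 0.90]),
                            method="Nelder-Mead")
    omega, alpha, beta = np.exp(res.x)
    s2 = garch11_filter(r, omega, alpha, beta)
    z = r / np.sqrt(s2)                            # model-filtered returns
    df, loc, scale = stats.t.fit(z)                # step 2: residual distribution
    s2_next = omega + alpha * r[-1]**2 + beta * s2[-1]
    q = stats.t.ppf(level, df, loc, scale)         # left-tail residual quantile
    return np.sqrt(s2_next) * q                    # one-day VaR as a return level
```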

5.
This paper outlines a general methodology for estimating the parameters of financial models commonly employed in the literature. A numerical Bayesian technique is utilised to obtain the posterior density of model parameters and functions thereof. Unlike maximum likelihood estimation, where inference is only justified in large samples, the Bayesian densities are exact for any sample size. A series of simulation studies are conducted to compare the properties of point estimates, the distribution of option and bond prices, and the power of specification tests under maximum likelihood and Bayesian methods. Results suggest that maximum-likelihood-based asymptotic distributions have poor finite-sample properties.

6.
Abstract

In this paper I first define the regime-switching lognormal model. Monthly data from the Standard and Poor’s 500 and the Toronto Stock Exchange 300 indices are used to fit the model parameters, using maximum likelihood estimation. The fit of the regime-switching model to the data is compared with other common econometric models, including the generalized autoregressive conditionally heteroskedastic model. The distribution function of the regime-switching model is derived. Prices of European options using the regime-switching model are derived and implied volatilities explored. Finally, an example of the application of the model to maturity guarantees under equity-linked insurance is presented. Equations for quantile and conditional tail expectation (Tail-VaR) risk measures are derived, and a numerical example compares the regime-switching lognormal model results with those using the more traditional lognormal stock return model.
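A minimal sketch of the maximum likelihood step for a two-regime lognormal (RSLN-2) model is given below, using the Hamilton filter to evaluate the log-likelihood; the parameter packing, transforms and starting values are illustrative.

```python
import numpy as np
from scipy import stats, optimize

def rsln2_loglik(params, y):
    """Log-likelihood of a two-regime lognormal model for monthly log-returns y,
    computed with the Hamilton filter.
    params = (mu1, mu2, log s1, log s2, logit p12, logit p21)."""
    mu = np.array([params[0], params[1]])
    sd = np.exp([params[2], params[3]])
    p12 = 1 / (1 + np.exp(-params[4]))             # regime 1 -> 2
    p21 = 1 / (1 + np.exp(-params[5]))             # regime 2 -> 1
    P = np.array([[1 - p12, p12], [p21, 1 - p21]])
    pi = np.array([p21, p12]) / (p12 + p21)        # stationary regime distribution
    ll = 0.0
    for yt in y:
        pred = pi @ P                              # one-step-ahead regime probabilities
        dens = stats.norm.pdf(yt, mu, sd)          # conditional density in each regime
        lik = pred @ dens
        ll += np.log(lik)
        pi = pred * dens / lik                     # filtered regime probabilities
    return ll

# illustrative fit on simulated data (real index log-returns would replace y)
rng = np.random.default_rng(1)
y = rng.normal(0.01, 0.035, 500)
start = np.array([0.01, -0.02, np.log(0.03), np.log(0.07), -3.0, -2.0])
res = optimize.minimize(lambda p: -rsln2_loglik(p, y), start, method="Nelder-Mead")
```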

7.
Abstract

In this paper, the author reviews some aspects of Bayesian data analysis and discusses how a variety of actuarial models can be implemented and analyzed in accordance with the Bayesian paradigm using Markov chain Monte Carlo techniques via the BUGS (Bayesian inference Using Gibbs Sampling) suite of software packages. The emphasis is placed on actuarial loss models, but other applications are referenced, and directions are given for obtaining documentation for additional worked examples on the World Wide Web.

8.
Abstract

In this paper we investigate the valuation of investment guarantees in a multivariate (discrete-time) framework. We present how to build multivariate models in general, and we survey the most important multivariate GARCH models. A direct multivariate application of regime-switching models is also discussed, as is the estimation of these models using maximum likelihood and their comparison in a multivariate setting. The computation of the CTE provision is further presented. We have estimated the models with a multivariate dataset (Canada, United States, United Kingdom, and Japan), and we compared the quality of their fit using multiple criteria and tests. We observe that multivariate GARCH models provide a better overall fit than regime-switching models. However, regime-switching models appropriately represent the fat tails of the returns distribution, which is where most GARCH models fail. This leads to significant differences in the value of the CTE provisions, and, in general, provisions computed with regime-switching models are higher. Thus, the results from this multivariate analysis are in line with what was obtained in the literature of univariate models.
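The CTE provision referred to above is, in simulation terms, the average of the worst tail of simulated losses beyond a chosen quantile. A minimal sketch, in which the confidence level and loss distribution are placeholders:

```python
import numpy as np

def cte(losses, level=0.95):
    """Conditional tail expectation: the mean of the worst (1 - level) fraction
    of simulated losses, i.e. the average loss beyond the VaR quantile."""
    losses = np.sort(np.asarray(losses))
    cutoff = int(np.ceil(level * len(losses)))
    return losses[cutoff:].mean()

# e.g. losses simulated from a fitted multivariate GARCH or regime-switching model
sim_losses = np.random.default_rng(0).lognormal(mean=0.0, sigma=1.0, size=100_000)
provision = cte(sim_losses, level=0.95)
```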

9.
Stochastic volatility (SV) models are theoretically more attractive than GARCH-type models as they allow additional randomness. Classical SV models specify a continuous probability distribution for volatility, so they do not admit a computable likelihood function, and estimation requires a Bayesian approach. A recent approach considers discrete stochastic autoregressive volatility models, which yield a bounded and tractable likelihood function; hence, maximum likelihood estimation can be achieved. This paper proposes a general approach to link SV models under the physical probability measure, both continuous and discrete types, to their processes under a martingale measure. Doing so enables us to deduce closed-form expressions for the VIX forecast for both SV models and GARCH-type models. We then carry out an empirical study to compare the performance of the continuous and discrete SV models using GARCH models as benchmark models.

10.
Abstract

Extract

While in some linear estimation problems the principle of unbiasedness can be said to be appropriate, we have just seen that in the present context we will have to appeal to other criteria. Let us first consider what we get from the maximum likelihood method. We do not claim any particular optimum property for this estimate of the risk distribution: it seems plausible, however, that one can prove a large-sample result analogous to the classical result on maximum likelihood estimation.

11.
The probability of informed trading (PIN) is a commonly used market microstructure measure for detecting the level of information asymmetry. Estimating PIN can be problematic due to corner solutions, local maxima and floating point exceptions (FPE). Yan and Zhang [J. Bank. Finance, 2012, 36, 454–467] show that whilst factorization can solve FPE, boundary solutions appear frequently in maximum likelihood estimation for PIN. A grid search initial value algorithm is suggested to overcome this problem. We present a faster method for reducing the likelihood of boundary solutions and local maxima based on hierarchical agglomerative clustering (HAC). We show that HAC can be used to determine an accurate and fast starting value approximation for PIN. This assists the maximum likelihood estimation process in both speed and accuracy.
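As background, the PIN likelihood being maximized is a per-day mixture of three Poisson regimes (no event, good news, bad news). A minimal sketch in log-space is shown below; working with log-probabilities avoids the floating-point exceptions that motivate the Yan–Zhang factorization, but neither that factorization nor the HAC starting-value scheme is reproduced, and the starting values and bounds are placeholders.

```python
import numpy as np
from scipy.stats import poisson
from scipy.special import logsumexp
from scipy.optimize import minimize

def pin_negloglik(params, buys, sells):
    """Negative log-likelihood of the sequential trade (EKOP) model.
    params = (alpha, delta, mu, eps_b, eps_s); each day is a mixture of
    'no event', 'good news' and 'bad news' Poisson regimes."""
    a, d, mu, eb, es = params
    log_terms = np.stack([
        np.log(1 - a)       + poisson.logpmf(buys, eb)      + poisson.logpmf(sells, es),
        np.log(a * (1 - d)) + poisson.logpmf(buys, eb + mu) + poisson.logpmf(sells, es),
        np.log(a * d)       + poisson.logpmf(buys, eb)      + poisson.logpmf(sells, es + mu),
    ])
    return -logsumexp(log_terms, axis=0).sum()

def estimate_pin(buys, sells, start=(0.4, 0.5, 200.0, 300.0, 300.0)):
    bounds = [(1e-4, 1 - 1e-4), (1e-4, 1 - 1e-4), (1e-4, None), (1e-4, None), (1e-4, None)]
    res = minimize(pin_negloglik, np.asarray(start),
                   args=(np.asarray(buys), np.asarray(sells)),
                   method="L-BFGS-B", bounds=bounds)
    a, d, mu, eb, es = res.x
    return a * mu / (a * mu + eb + es)       # probability of informed trading
```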

12.
We develop and implement a technique for closed-form maximum likelihood estimation (MLE) of multifactor affine yield models. We derive closed-form approximations to likelihoods for nine Dai and Singleton (2000) affine models. Simulations show our technique very accurately approximates true (but infeasible) MLE. Using US Treasury data, we estimate nine affine yield models with different market price of risk specifications. MLE allows non-nested model comparison using likelihood ratio tests; the preferred model depends on the market price of risk. Estimation with simulated and real data suggests our technique is much closer to true MLE than Euler and quasi-maximum likelihood (QML) methods.

13.
Evolving volatility is a dominant feature observed in most financial time series and a key parameter used in option pricing and many other financial risk analyses. A number of methods for non-parametric scale estimation are reviewed and assessed with regard to the stylized features of financial time series. A new non-parametric procedure for estimating historical volatility is proposed based on local maximum likelihood estimation for the t-distribution. The performance of this procedure is assessed using simulated and real price data and is found to be the best among the estimators we consider. We propose it as a replacement for the moving-variance historical volatility estimator.
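A rough sketch of the local t-likelihood idea: within each rolling window, maximize the Student-t likelihood over the scale with the degrees of freedom held fixed. The window length and degrees of freedom below are placeholders, and this is a simplified stand-in for the paper's procedure.

```python
import numpy as np
from scipy import stats, optimize

def local_t_scale(returns, window=60, df=5):
    """Rolling historical volatility: in each window, maximize the Student-t
    likelihood over the scale only (df fixed, location 0), which down-weights
    outliers compared with the moving-variance estimator."""
    r = np.asarray(returns)
    out = np.full(len(r), np.nan)
    for t in range(window, len(r)):
        x = r[t - window:t]
        nll = lambda log_s: -stats.t.logpdf(x, df, loc=0.0, scale=np.exp(log_s)).sum()
        res = optimize.minimize_scalar(nll)
        s = np.exp(res.x)
        out[t] = s * np.sqrt(df / (df - 2))   # convert t-scale to a standard deviation
    return out
```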

14.
We estimate a binomial probit model to examine the significance of important explanatory variables documented in the seasoned equity offering (SEO) underpricing literature using two statistical approaches: maximum likelihood estimation and Bayesian estimation. In particular, our estimation relies on SEO-related data in the Chinese financial market, where the pricing mechanism is less transparent than in the U.S. market. We find that the signs of the coefficients for the explanatory variables agree across the two approaches, but their magnitudes differ. Our findings also show that the estimation results are generally consistent with those observed in the U.S. market.
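A minimal sketch of the maximum likelihood leg (the probit log-likelihood maximized numerically); the design matrix and covariates are placeholders, and the Bayesian leg would instead place a prior on the coefficients and sample the posterior.

```python
import numpy as np
from scipy import stats, optimize

def probit_mle(X, y):
    """ML estimation of a probit model P(y = 1 | x) = Phi(x'beta).
    X: (n, k) design matrix (include a column of ones for the intercept)."""
    X, y = np.asarray(X, float), np.asarray(y, float)

    def negloglik(beta):
        xb = X @ beta
        return -(y * stats.norm.logcdf(xb) + (1 - y) * stats.norm.logcdf(-xb)).sum()

    res = optimize.minimize(negloglik, np.zeros(X.shape[1]), method="BFGS")
    return res.x   # estimated coefficients
```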

15.
In recent years, dynamic stochastic general equilibrium (DSGE) models have come to play an increasing role in central banks as an aid in the formulation of monetary policy (and, increasingly after the global crisis, for maintaining financial stability). Compared to other widely used econometric models (such as vector autoregressive or large-scale econometric models), DSGE models are less a-theoretic and rest on secure micro-foundations based on the optimizing behaviour of rational economic agents. Additionally, despite being strongly tied to theory, the models can be ‘taken to the data’ in a meaningful way. A major feature of these models is that their theoretical underpinnings lie in what has now come to be called the New Consensus Macroeconomics (NCM). This paper concentrates on the econometric structure underpinning such models. Identification, estimation and evaluation issues are discussed at length, with special emphasis on the role of Bayesian maximum likelihood methods.

16.
It is well known that the exponential dispersion family (EDF) of univariate distributions is closed under Bayesian revision in the presence of natural conjugate priors. However, this is not the case for the general multivariate EDF. This paper derives a second-order approximation to the posterior likelihood of a naturally conjugated generalised linear model (GLM), i.e., multivariate EDF subject to a link function (Section 5.5). It is not the same as a normal approximation. It does, however, lead to second-order Bayes estimators of parameters of the posterior. The family of second-order approximations is found to be closed under Bayesian revision. This generates a recursion for repeated Bayesian revision of the GLM with the acquisition of additional data. The recursion simplifies greatly for a canonical link. The resulting structure is easily extended to a filter for estimation of the parameters of a dynamic generalised linear model (DGLM) (Section 6.2). The Kalman filter emerges as a special case. A second type of link function, related to the canonical link, and with similar properties, is identified. This is called here the companion canonical link. For a given GLM with canonical link, the companion to that link generates a companion GLM (Section 4). The recursive form of the Bayesian revision of this GLM is also obtained (Section 5.5.3). There is a perfect parallel between the development of the GLM recursion and its companion. A dictionary for translation between the two is given so that one is readily derived from the other (Table 5.1). The companion canonical link also generates a companion DGLM. A filter for this is obtained (Section 6.3). Section 1.2 provides an indication of how the theory developed here might be applied to loss reserving. A sequel paper, providing numerical illustrations of this, is planned.

17.
Abstract

Ntzoufras and Dellaportas (2002) described four models for outstanding claim amounts of the “reported but not settled” variety. Two of the models incorporated claim-counts data in addition to the claim amounts themselves in order to add a hierarchical stage in the usual log-normal and state-space models. The purpose of this discussion is to describe how the models presented in Ntzoufras and Dellaportas may be implemented using WinBUGS. The use of WinBUGS to implement the Bayesian analysis of a number of other actuarial models was considered by Scollnik (2001).

18.
The realized-GARCH framework is extended to incorporate the two-sided Weibull distribution, for the purpose of volatility and tail risk forecasting in a financial time series. Further, the realized range, as a competitor to realized variance or daily returns, is employed as the realized measure in the realized-GARCH framework. Sub-sampling and scaling methods are applied to both the realized range and realized variance to help deal with inherent microstructure noise and inefficiency. A Bayesian Markov chain Monte Carlo (MCMC) method is adapted and employed for estimation and forecasting, while various MCMC efficiency and convergence measures are employed to assess the validity of the method. In addition, the properties of the MCMC estimator are assessed and compared with maximum likelihood via a simulation study. Compared to a range of well-known parametric GARCH and realized-GARCH models, tail risk forecasting results across seven market indices, as well as two individual assets, clearly favour the proposed realized-GARCH model incorporating the two-sided Weibull distribution, especially those employing the sub-sampled realized variance and sub-sampled realized range.
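For reference, the two realized measures compared above can be computed from intraday data as sketched below (a sub-sampled realized variance and a Parkinson-type realized range); the sampling grid and data layout are illustrative, and the realized-GARCH and MCMC machinery is not reproduced.

```python
import numpy as np

def realized_variance(prices, step=5):
    """Realized variance from intraday prices sampled every `step` observations,
    averaged over the `step` offset grids (simple sub-sampling)."""
    logp = np.log(np.asarray(prices))
    rvs = []
    for offset in range(step):
        grid = logp[offset::step]
        rvs.append(np.sum(np.diff(grid)**2))
    return np.mean(rvs)

def realized_range(highs, lows):
    """Parkinson-type realized range: sum of scaled squared log high-low ranges
    over intraday intervals."""
    h, l = np.log(np.asarray(highs)), np.log(np.asarray(lows))
    return np.sum((h - l)**2) / (4 * np.log(2))
```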

19.

Norberg (1989) analyses the heterogeneity in a portfolio of group life insurances using a parametric empirical Bayesian approach. In the present paper the model of Norberg is compared to a parametric fully Bayesian model and to a non-parametric fully Bayesian model.

20.
Abstract

Estimation of the tail index parameter of a single-parameter Pareto model has wide application in actuarial and other sciences. Here we examine various estimators from the standpoint of two competing criteria: efficiency and robustness against upper outliers. With the maximum likelihood estimator (MLE) being efficient but nonrobust, we desire alternative estimators that retain a relatively high degree of efficiency while also being adequately robust. A new generalized median type estimator is introduced and compared with the MLE and several well-established estimators associated with the methods of moments, trimming, least squares, quantiles, and percentile matching. The method of moments and least squares estimators are found to be relatively deficient with respect to both criteria and should be disfavored, while the trimmed mean and generalized median estimators tend to dominate the other competitors. The generalized median type performs best overall. These findings provide a basis for revision and updating of prevailing viewpoints. Other topics discussed are applications to robust estimation of upper quantiles, tail probabilities, and actuarial quantities, such as stop-loss and excess-of-loss reinsurance premiums, that arise in connection with portfolio solvency. Robust parametric methods are compared with empirical nonparametric methods, which are typically nonrobust.
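To make the efficiency/robustness trade-off concrete, here is a minimal sketch contrasting the Pareto tail-index MLE with a simple median-based estimator (an illustration only, not the paper's generalized median estimator); the threshold xm and the contamination step are placeholders.

```python
import numpy as np

def pareto_tail_mle(x, xm):
    """MLE of alpha for the single-parameter Pareto F(x) = 1 - (xm/x)^alpha, x >= xm.
    Efficient, but sensitive to a single inflated upper outlier."""
    logs = np.log(np.asarray(x) / xm)
    return len(logs) / logs.sum()

def pareto_tail_median(x, xm):
    """Median-based alternative: ln(X/xm) is Exponential(alpha) with median
    ln(2)/alpha, so alpha_hat = ln(2) / median(ln(X/xm)). Robust to upper outliers."""
    logs = np.log(np.asarray(x) / xm)
    return np.log(2) / np.median(logs)

# outlier sensitivity: inflating the largest observation barely moves the median version
xm, alpha = 1.0, 2.5
rng = np.random.default_rng(0)
sample = xm * rng.random(200) ** (-1 / alpha)   # inverse-CDF Pareto draws
sample[np.argmax(sample)] *= 10                 # contaminate the largest loss
print(pareto_tail_mle(sample, xm), pareto_tail_median(sample, xm))
```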
