Similar Articles (20 results)
1.
Quantitative Finance, 2013, 13(3), 163–172
Abstract

Support vector machines (SVMs) are a relatively new nonparametric tool for regression estimation. We use this tool to estimate the parameters of a GARCH model for predicting the conditional volatility of stock market returns. GARCH models are usually estimated by maximum likelihood (ML) procedures under the assumption that the data are normally distributed. In this paper we show that GARCH models can instead be estimated using SVMs, and that such estimates have higher predictive ability than those obtained via common ML methods.
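
As an illustration of the idea (not the paper's exact formulation), the sketch below treats the GARCH(1,1) recursion as a regression problem and lets a support vector regressor learn the mapping from lagged information to the next squared return. It assumes NumPy and scikit-learn; the features, kernel, and hyperparameters are arbitrary choices made here.

```python
# A minimal sketch of the SVM-for-GARCH idea: treat the GARCH(1,1) recursion
# sigma_t^2 = w + a*r_{t-1}^2 + b*sigma_{t-1}^2 as a regression problem and
# let an SVR learn the mapping from lagged quantities to the next squared return.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Simulate a GARCH(1,1) series so the example is self-contained.
n, w, a, b = 2000, 0.05, 0.10, 0.85
r = np.empty(n); s2 = np.empty(n); s2[0] = w / (1 - a - b)
r[0] = rng.normal(scale=np.sqrt(s2[0]))
for t in range(1, n):
    s2[t] = w + a * r[t-1]**2 + b * s2[t-1]
    r[t] = rng.normal(scale=np.sqrt(s2[t]))

# Rolling mean of squared returns serves as an observable volatility proxy.
proxy = np.convolve(r**2, np.ones(20) / 20, mode="same")
X = np.column_stack([r[:-1]**2, proxy[:-1]])   # features at time t-1
y = r[1:]**2                                   # target: next squared return

svr = SVR(kernel="rbf", C=1.0, epsilon=0.01).fit(X[:1500], y[:1500])
pred = svr.predict(X[1500:])
print("out-of-sample MSE against the true conditional variance:",
      np.mean((pred - s2[1501:])**2))
```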

2.
Abstract

This case study illustrates the analysis of two possible regression models for bivariate claims data. Estimates or forecasts of loss distributions under these two models are developed using two methods of analysis: (1) maximum likelihood estimation and (2) the Bayesian method. These methods are applied to two data sets consisting of 24 and 1,500 paired observations, respectively. The Bayesian analyses are implemented using Markov chain Monte Carlo via WinBUGS, as discussed in Scollnik (2001). A comparison of the analyses reveals that forecasted total losses can be dramatically underestimated by the maximum likelihood estimation method because it ignores the inherent parameter uncertainty.
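
A minimal sketch of the contrast the paper draws, under simplified assumptions chosen here: lognormal claim severities, a flat prior, and a random-walk Metropolis sampler standing in for the WinBUGS machinery. The MLE plug-in forecast ignores parameter uncertainty; the posterior mean forecast averages over it.

```python
# MLE plug-in vs Bayesian posterior-predictive forecast of total losses
# for lognormal claim sizes, on a deliberately small sample.
import numpy as np

rng = np.random.default_rng(1)
claims = rng.lognormal(mean=8.0, sigma=1.5, size=24)   # small sample
logc = np.log(claims)

# MLE plug-in: E[total of m future claims] = m * exp(mu + sigma^2/2).
mu_hat, sig_hat = logc.mean(), logc.std(ddof=0)
m = 100
mle_forecast = m * np.exp(mu_hat + sig_hat**2 / 2)

# Bayesian: flat prior on (mu, sigma), random-walk Metropolis.
def loglik(mu, sig):
    return -len(logc) * np.log(sig) - np.sum((logc - mu)**2) / (2 * sig**2)

mu, sig, draws = mu_hat, sig_hat, []
for i in range(20000):
    mu_p  = mu + rng.normal(scale=0.3)
    sig_p = sig * np.exp(rng.normal(scale=0.2))  # multiplicative, stays positive
    # log(sig_p/sig) is the Jacobian correction for the log-scale proposal.
    if np.log(rng.uniform()) < loglik(mu_p, sig_p) - loglik(mu, sig) + np.log(sig_p / sig):
        mu, sig = mu_p, sig_p
    if i >= 5000:                                # discard burn-in
        draws.append(m * np.exp(mu + sig**2 / 2))

print("MLE plug-in forecast:    %.0f" % mle_forecast)
print("posterior mean forecast: %.0f" % np.mean(draws))  # typically larger
```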

3.
The probability of informed trading (PIN) is a commonly used market microstructure measure for detecting the level of information asymmetry. Estimating PIN can be problematic due to corner solutions, local maxima, and floating point exceptions (FPE). Yan and Zhang [J. Bank. Finance, 2012, 36, 454–467] show that while factorization can solve FPE, boundary solutions still appear frequently in maximum likelihood estimation for PIN, and they suggest a grid-search algorithm for initial values to overcome this problem. We present a faster method for reducing the likelihood of boundary solutions and local maxima, based on hierarchical agglomerative clustering (HAC). We show that HAC can be used to determine an accurate and fast starting-value approximation for PIN, which assists the maximum likelihood estimation process in both speed and accuracy.
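
The sketch below is a minimal, numerically stable version of the underlying EKOP mixture likelihood, using logsumexp as one standard way to avoid floating point exceptions. The paper's HAC starting-value scheme is not reproduced; the moment-based initial values here are ad hoc.

```python
# Numerically stable EKOP/PIN mixture log-likelihood via logsumexp.
import numpy as np
from scipy.special import logsumexp
from scipy.stats import poisson
from scipy.optimize import minimize

def neg_loglik(theta, B, S):
    alpha, delta, mu, eb, es = theta
    # Three regimes per day: no event, bad-news event, good-news event.
    c1 = np.log(1 - alpha) + poisson.logpmf(B, eb) + poisson.logpmf(S, es)
    c2 = np.log(alpha) + np.log(delta) + poisson.logpmf(B, eb) + poisson.logpmf(S, mu + es)
    c3 = np.log(alpha) + np.log(1 - delta) + poisson.logpmf(B, mu + eb) + poisson.logpmf(S, es)
    return -np.sum(logsumexp([c1, c2, c3], axis=0))

# Simulated buy/sell counts for 60 trading days.
rng = np.random.default_rng(2)
true = dict(alpha=0.4, delta=0.5, mu=80, eb=50, es=50)
event = rng.uniform(size=60) < true["alpha"]
bad = rng.uniform(size=60) < true["delta"]
B = rng.poisson(true["eb"] + true["mu"] * (event & ~bad))   # good news adds buys
S = rng.poisson(true["es"] + true["mu"] * (event & bad))    # bad news adds sells

res = minimize(neg_loglik, x0=[0.5, 0.5, np.abs(B - S).mean(), B.mean(), S.mean()],
               args=(B, S), method="L-BFGS-B",
               bounds=[(1e-4, 1 - 1e-4)] * 2 + [(1e-4, None)] * 3)
a, d, m, eb, es = res.x
print("PIN =", a * m / (a * m + eb + es))
```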

4.
This paper studies the parameter estimation problem for Ornstein–Uhlenbeck stochastic volatility models driven by Lévy processes. Estimation has been regarded as the principal challenge in applying these models since they were proposed by Barndorff-Nielsen and Shephard [J. R. Stat. Soc. Ser. B, 2001, 63(2), 167–241]. Most previous work has used a Bayesian paradigm, whereas we treat the problem in the framework of maximum likelihood estimation, applying gradient-based simulation optimization. A hidden Markov model is introduced to formulate the likelihood of the observations; sequential Monte Carlo is applied to sample the hidden states from the posterior distribution; and smooth perturbation analysis is used to deal with the discontinuities introduced by jumps when estimating the gradient. Numerical experiments indicate that the proposed gradient-based simulated maximum likelihood estimation approach provides an efficient alternative to current estimation methods.
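
A heavily simplified sketch of the simulated-likelihood ingredient: a bootstrap particle filter for a crudely Euler-discretized model in which the variance decays exponentially and receives compound-Poisson exponential jumps. The discretization and the jump specification are assumptions of this sketch, not the paper's, and the smooth perturbation analysis gradient estimator is not reproduced.

```python
# Bootstrap particle filter log-likelihood for a discretized BNS-type model:
# sigma2 decays at rate lam and receives compound-Poisson exponential jumps;
# returns are N(0, sigma2 * dt).
import numpy as np

def pf_loglik(y, lam, jump_rate, jump_mean, dt=1.0, n_part=500, seed=3):
    rng = np.random.default_rng(seed)
    # Start particles near the stationary mean of the variance process.
    sig2 = np.full(n_part, jump_rate * jump_mean / lam)
    ll = 0.0
    for yt in y:
        # Propagate: exponential decay plus Poisson-exponential jumps.
        n_jumps = rng.poisson(jump_rate * dt, n_part)
        jumps = np.array([rng.exponential(jump_mean, k).sum() for k in n_jumps])
        sig2 = sig2 * np.exp(-lam * dt) + jumps
        # Weight by the Gaussian return density, then resample.
        w = np.exp(-yt**2 / (2 * sig2 * dt)) / np.sqrt(2 * np.pi * sig2 * dt)
        ll += np.log(w.mean())
        sig2 = rng.choice(sig2, size=n_part, p=w / w.sum())
    return ll

y = np.random.default_rng(0).normal(scale=0.4, size=100)  # placeholder returns
print(pf_loglik(y, lam=0.1, jump_rate=0.2, jump_mean=0.1))
```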

5.
Abstract

In [1] it was shown that, under mild conditions, the likelihood function has, with probability approaching unity as n increases, a relative maximum in a fixed open interval centered at the true value of the parameter. This paper strengthens that result by letting the length of the interval approach zero as n increases. An application is given.

6.
Abstract

In this paper we investigate the valuation of investment guarantees in a multivariate (discrete-time) framework. We show how to build multivariate models in general and survey the most important multivariate GARCH models. A direct multivariate application of regime-switching models is also discussed, as are the estimation of these models by maximum likelihood and their comparison in a multivariate setting. The computation of the CTE provision is also presented. We estimate the models on a multivariate data set (Canada, United States, United Kingdom, and Japan) and compare the quality of their fit using multiple criteria and tests. We observe that multivariate GARCH models provide a better overall fit than regime-switching models. However, regime-switching models appropriately represent the fat tails of the returns distribution, which is where most GARCH models fail. This leads to significant differences in the value of the CTE provisions; in general, provisions computed with regime-switching models are higher. The results of this multivariate analysis are thus in line with those obtained in the univariate literature.
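
For readers unfamiliar with the CTE provision mentioned above, here is a minimal sketch: the CTE at level q is the mean of the worst (1 - q) share of simulated losses. The put-style payoff used to generate losses below is a placeholder, not the paper's guarantee design.

```python
# Conditional tail expectation (CTE) of simulated guarantee losses.
import numpy as np

def cte(losses, q=0.95):
    losses = np.sort(np.asarray(losses))
    k = int(np.ceil(q * len(losses)))
    return losses[k:].mean()          # mean of the worst (1-q) outcomes

rng = np.random.default_rng(4)
# Placeholder put-style guarantee payoff: max(guarantee - fund value, 0).
losses = np.maximum(100 - 100 * np.exp(rng.normal(0.05, 0.2, 100000)), 0)
print("CTE95 provision:", cte(losses))
```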

7.
Abstract

A sequence of maximum likelihood estimators based on a sequence of independent but not necessarily identically distributed random variables is shown to be consistent under certain assumptions. Some examples are given to show that these assumptions are easy to verify and not very restrictive.

8.
Abstract

By analysing a large data set of daily returns with a maximum likelihood data clustering technique, we identify economic sectors as clusters of assets with similar economic dynamics. The sector size distribution follows Zipf's law. We also find that patterns of daily market-wide economic activity cluster into classes that can be identified with market states. The distribution of frequencies of market states shows scale-free properties, and the memory of the market-state process extends to long times (~50 days). Assets in the same sector behave similarly across states. We characterize market efficiency by analysing the market's predictability and find that the market is indeed close to efficient. We also find evidence of a recurring dynamic pattern after market crashes.

9.
Brockman and Turtle [J. Finan. Econ., 2003, 67, 511–529] develop a barrier option framework to show that default barriers are significantly positive, with most implied barriers larger than the book value of corporate liabilities. We show theoretically and empirically that this result is biased by the approximation of the market value of corporate assets as the sum of the market value of equity and the book value of liabilities, which leads to a significant overestimation of the default barrier. To eliminate this bias, we propose a maximum likelihood (ML) approach to estimating asset values, asset volatilities, and default barriers, and apply it to a large sample of industrial firms. We find that default barriers are positive but not very significant; in our sample, most of the estimated barriers are lower than the book values of corporate liabilities. Beyond the default barriers themselves, we also find significant biases in Brockman and Turtle's estimates of asset values and asset volatilities.

10.
Abstract

Estimation of the tail index parameter of a single-parameter Pareto model has wide application in actuarial and other sciences. Here we examine various estimators from the standpoint of two competing criteria: efficiency and robustness against upper outliers. Since the maximum likelihood estimator (MLE) is efficient but nonrobust, we seek alternative estimators that retain a relatively high degree of efficiency while also being adequately robust. A new generalized median type estimator is introduced and compared with the MLE and several well-established estimators associated with the methods of moments, trimming, least squares, quantiles, and percentile matching. The method of moments and least squares estimators are found to be relatively deficient with respect to both criteria and should be disfavored, while the trimmed mean and generalized median estimators tend to dominate the other competitors, with the generalized median type performing best overall. These findings provide a basis for revising prevailing viewpoints. Other topics discussed are applications to robust estimation of upper quantiles, tail probabilities, and actuarial quantities such as stop-loss and excess-of-loss reinsurance premiums, which arise in connection with portfolio solvency. Robust parametric methods are compared with empirical nonparametric methods, which are typically nonrobust.
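
A minimal illustration of the efficiency/robustness trade-off the paper studies: the closed-form Pareto tail-index MLE reacts strongly to a single inflated upper observation, while a crude trimmed variant barely moves. The trimming here simply drops the largest points and re-applies the MLE formula; the paper's trimmed and generalized median estimators include proper adjustments that this sketch omits.

```python
# Sensitivity of the single-parameter Pareto tail-index MLE to one outlier.
import numpy as np

def pareto_mle(x, x_min):
    """MLE of the tail index alpha: n / sum(log(x_i / x_min))."""
    x = np.asarray(x)
    return len(x) / np.sum(np.log(x / x_min))

rng = np.random.default_rng(5)
x_min, alpha = 1.0, 2.0
x = x_min * rng.uniform(size=200) ** (-1 / alpha)   # inverse-CDF Pareto sample

print("MLE, clean data:   ", pareto_mle(x, x_min))
x[0] = x.max() * 50                                 # one gross upper outlier
print("MLE, contaminated: ", pareto_mle(x, x_min))  # biased toward 0
# Crude robust alternative: drop the k largest points, then re-apply the MLE.
print("crude trimmed MLE: ", pareto_mle(np.sort(x)[:-5], x_min))
```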

11.
Abstract

The lognormal reserving model is considered. The contribution of the paper is to derive explicit expressions for the maximum likelihood estimators, which are expressed in terms of development factors that are geometric averages. The distribution of the estimators is derived, and the analysis is shown to be invariant to traditional exposure measures.
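
A minimal sketch of the kind of development factor the paper derives: for each development lag, the geometric mean of the observed link ratios (ordinary chain ladder uses a volume-weighted arithmetic mean instead). The triangle below is made-up data, and the exact form of the paper's estimators may differ.

```python
# Completing a cumulative run-off triangle with geometric-average
# development factors.
import numpy as np

# Rows: accident years; columns: development years (made-up data).
tri = np.array([
    [1001, 1855, 2423, 2988],
    [1113, 2103, 2774, np.nan],
    [1265, 2433, np.nan, np.nan],
    [1490, np.nan, np.nan, np.nan],
])

for j in range(tri.shape[1] - 1):
    ratios = tri[:, j + 1] / tri[:, j]
    ratios = ratios[~np.isnan(ratios)]
    f = np.exp(np.mean(np.log(ratios)))              # geometric-average factor
    col = tri[:, j + 1]
    col[np.isnan(col)] = tri[np.isnan(col), j] * f   # fill the lower triangle

print(tri.round(0))   # completed triangle; ultimates in the last column
```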

12.
Quantitative Finance, 2013, 13(5), 376–384
Abstract

Volatility plays an important role in derivatives pricing, asset allocation, and risk management, to name but a few areas. It is therefore crucial to make the utmost use of the scant information typically available in short time windows when estimating volatility. We propose a volatility estimator that uses the high and low prices in addition to the close, all of which are typically available to investors. The estimator is based on a maximum likelihood approach: we present explicit formulae for the likelihood of the drift and volatility parameters when the underlying asset is assumed to follow a Brownian motion with constant drift and volatility, and we maximize this likelihood to obtain the volatility estimator. While we present the method in the context of a Brownian motion, the general methodology is applicable whenever one can obtain the likelihood of the volatility parameter given the high, low, and close information. Simulations indicate that our estimator consistently outperforms existing estimators that use the same information and assumptions, and simulations using real price data demonstrate that our method produces more stable estimates. We also consider the effects of quantized prices and discretized time.
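
The paper's exact estimator maximizes the joint likelihood of the high, low, and close, whose density is somewhat involved; as a simpler stand-in, the sketch below compares the classical Parkinson range-based estimator (high and low only) with the close-to-close estimator on simulated Brownian paths, illustrating the same extra-information effect.

```python
# Parkinson range-based volatility vs close-to-close volatility on
# simulated driftless Brownian log-price paths.
import numpy as np

rng = np.random.default_rng(6)
sigma, n_days, n_steps = 0.02, 250, 390          # e.g. 1-minute steps per day
steps = rng.normal(0, sigma / np.sqrt(n_steps), (n_days, n_steps))
paths = np.cumsum(steps, axis=1)                 # each day starts at log-price 0

high, low, close = paths.max(axis=1), paths.min(axis=1), paths[:, -1]
high = np.maximum(high, 0); low = np.minimum(low, 0)   # include the start point

park = np.sqrt(np.mean((high - low) ** 2) / (4 * np.log(2)))  # Parkinson
c2c = np.sqrt(np.mean(close ** 2))                            # close-to-close
print("true sigma: %.4f  Parkinson: %.4f  close-to-close: %.4f"
      % (sigma, park, c2c))
```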

13.
Abstract

In this paper I first define the regime-switching lognormal model. Monthly data from the Standard and Poor's 500 and the Toronto Stock Exchange 300 indices are used to fit the model parameters by maximum likelihood estimation. The fit of the regime-switching model to the data is compared with that of other common econometric models, including the generalized autoregressive conditionally heteroskedastic (GARCH) model. The distribution function of the regime-switching model is derived, prices of European options under the model are developed, and implied volatilities are explored. Finally, an example of the application of the model to maturity guarantees under equity-linked insurance is presented. Equations for quantile and conditional tail expectation (Tail-VaR) risk measures are derived, and a numerical example compares the regime-switching lognormal model results with those of the more traditional lognormal stock return model.
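
A minimal sketch of the maximum likelihood machinery for a two-regime RSLN model: the Hamilton filter below evaluates the log-likelihood, which can then be handed to a numerical optimizer. Parameter names and the placeholder values are illustrative, not from the paper.

```python
# Hamilton-filter log-likelihood for a two-regime regime-switching
# lognormal (RSLN) model of log-returns.
import numpy as np
from scipy.stats import norm

def rsln_loglik(params, y):
    mu1, mu2, s1, s2, p12, p21 = params
    P = np.array([[1 - p12, p12], [p21, 1 - p21]])   # regime transition matrix
    # Stationary distribution of the regime chain as the starting weights.
    pi = np.array([p21, p12]) / (p12 + p21)
    ll = 0.0
    for yt in y:
        dens = norm.pdf(yt, [mu1, mu2], [s1, s2])    # density under each regime
        f = pi * dens
        ll += np.log(f.sum())
        pi = (f / f.sum()) @ P                       # predict next regime probs
    return ll

y = np.random.default_rng(9).normal(0.01, 0.04, 300)   # placeholder returns
print(rsln_loglik([0.01, -0.02, 0.035, 0.08, 0.04, 0.2], y))
# Maximizing this (e.g. via scipy.optimize.minimize on the negative) gives
# the ML fit described above.
```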

14.
Abstract

The paper studies the one-year estimation uncertainty associated with credibility-based loss reserving methods when claim development can be described by the models of Bühlmann-Straub or Hesselager-Witting. Having found a formula for it, one might naturally try to minimise the one-year estimation uncertainty in the same way one minimises the ultimate uncertainty, i.e. by minimising the MSEP. It turns out that minimising the one-year estimation uncertainty leads to unreasonable and unsightly results, which calls into question the soundness of the concept of one-year estimation uncertainty.

15.
Abstract

This paper falls into three parts. The first part (Sections 1 to 4) is an attempt to consolidate the existing credibility models, which are rapidly becoming more numerous, by constructing one large all-embracing model. To do this, it is necessary to abstract from the usual setting of credibility theory, the insurance world, and work in more abstract Hilbert spaces. This increases the complexity of the theory, but the results nevertheless appear in a form familiar to credibility theorists.

The second part (Section 5) shows that, in the so-called inhomogeneous case of credibility estimation, best estimates are obtained if one uses sufficient statistics, a result which has previously been known under much more restricted circumstances. In the same section it is pointed out that the validity of the analogous result in the homogeneous case is by no means obvious.

The third part (Section 6) demonstrates that a number of existing credibility models can be derived as special cases of the model developed here, and goes on to deal briefly with a couple of new models which arise naturally out of the abstract credibility formulation.

16.
Abstract

Phase-type distributions are one of the most general classes of distributions permitting a Markovian interpretation. Sparre Andersen risk models with phase-type claim interarrival times or phase-type claims can be analyzed using Markovian techniques, and results can be expressed in compact matrix forms. The computations involved are readily programmable in practice.

This paper studies some quantities associated with the first passage time and the time of ruin in a Sparre Andersen risk model with phase-type interclaim times. In an earlier discussion the present author obtained a matrix expression for the Laplace transform of the first time that the surplus process reaches a given target from the initial surplus. Using this result, we analyze (1) the Laplace transform of the recovery time after ruin, (2) the probability that the surplus attains a certain level before ruin, and (3) the distribution of the maximum severity of ruin. We also give a matrix expression for the expected discounted dividend payments prior to ruin for the Sparre Andersen model in the presence of a constant dividend barrier.
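
A minimal sketch of the matrix machinery these results rest on: for a phase-type distribution with initial vector alpha and sub-generator T, the density is f(t) = alpha expm(Tt) t0 with exit-rate vector t0 = -T 1. An Erlang(2) interclaim time, a simple special case usable in Sparre Andersen models, is the example below; it assumes SciPy.

```python
# Phase-type density and CDF via the matrix exponential.
import numpy as np
from scipy.linalg import expm

alpha = np.array([1.0, 0.0])                # start in phase 1
T = np.array([[-2.0, 2.0], [0.0, -2.0]])    # Erlang(2) with rate 2
t0 = -T @ np.ones(2)                        # exit-rate vector

def ph_pdf(t):
    return alpha @ expm(T * t) @ t0

def ph_cdf(t):
    return 1.0 - alpha @ expm(T * t) @ np.ones(2)

# Matches the Erlang(2, rate=2) density 4*t*exp(-2t) and its CDF at t=1.
print(ph_pdf(1.0), ph_cdf(1.0))
```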

17.
Abstract

1. Introduction and Summary

Spectral analysis plays an important role in the study of stationary stochastic processes. It cannot, however, always be assumed that the nature of the corresponding spectral function is known a priori; we are then faced with two problems. In the first we may have either a discrete spectrum plus uniform noise or a continuous spectrum; in the second we may have both at the same time. A possible solution to the second problem has been suggested by Whittle [5]. A discriminatory test based on the likelihood ratio has been put forward by Bartlett as a solution to the first problem, which is an important one in practice. The test procedure was applied to two suitable artificial series. When applied to a series with a harmonic element, the test failed to arrive at a decision. An investigation was then made into the applicability of this test to such series in general.

18.
A popular way to fit a parametric copula to multivariate data is the pseudo maximum likelihood estimation proposed by Genest, Ghoudi, and Rivest. Although interval estimation can be obtained by estimating the asymptotic covariance of the pseudo maximum likelihood estimator, we propose a jackknife empirical likelihood method to construct confidence regions for the parameters without estimating any additional quantities such as the asymptotic covariance. A simulation study shows the advantages of the new method in cases of strong dependence or when more than one parameter is involved.
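
A minimal sketch of the pseudo maximum likelihood step itself (the paper's jackknife empirical likelihood intervals are not reproduced): the margins are replaced by rescaled ranks, and the copula log-density is maximized. A one-parameter Clayton copula is used here for concreteness.

```python
# Pseudo maximum likelihood estimation for a Clayton copula.
import numpy as np
from scipy.stats import rankdata
from scipy.optimize import minimize_scalar

def clayton_logpdf(u, v, th):
    return (np.log(th + 1) - (th + 1) * (np.log(u) + np.log(v))
            - (2 + 1 / th) * np.log(u ** -th + v ** -th - 1))

rng = np.random.default_rng(7)
# Sample from a Clayton copula (theta=2) via the conditional method.
n, theta = 500, 2.0
u = rng.uniform(size=n); w = rng.uniform(size=n)
v = ((w ** (-theta / (1 + theta)) - 1) * u ** -theta + 1) ** (-1 / theta)

# Pseudo-observations: ranks scaled by n+1 stay strictly inside (0, 1).
pu, pv = rankdata(u) / (n + 1), rankdata(v) / (n + 1)
res = minimize_scalar(lambda th: -clayton_logpdf(pu, pv, th).sum(),
                      bounds=(0.05, 10), method="bounded")
print("pseudo-MLE of theta:", res.x)   # close to the true value 2
```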

19.
1. The Estimation Problem.

In problems of statistical estimation, we are concerned with certain variables assumed to be random variables having more or less unknown probability distributions. A number of observed values of these variables are given, and it is required to use these values to learn something about the unknown distributions.

20.
Abstract

This paper examines the so-called 1/n investment puzzle observed in defined contribution plans, whereby some participants divide their contributions equally among the available asset classes. It has been argued that this is a very naive strategy, since it contradicts the fundamental tenets of modern portfolio theory. We use simple arguments to show that this behavior is perhaps less naive than it at first appears. It is well known that optimal portfolio weights in a mean-variance setting are extremely sensitive to estimation errors, especially those in the expected returns. We show that when estimation error is accounted for, the 1/n rule has some advantages in terms of robustness, and we demonstrate this with numerical experiments. The rule can provide a risk-averse investor with protection against very bad outcomes.
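
A minimal numerical experiment in the spirit of the paper's argument, under assumptions chosen here (identical true means, known covariance, only the means estimated): feeding estimated means into the mean-variance weight formula yields extreme, unstable portfolios, while the 1/n rule does not suffer from this.

```python
# Out-of-sample comparison of estimated mean-variance weights vs the 1/n rule.
import numpy as np

rng = np.random.default_rng(8)
n_assets, n_obs = 5, 60
mu_true = np.full(n_assets, 0.08)   # identical true means: 1/n is optimal here
cov = 0.04 * (0.3 * np.ones((n_assets, n_assets)) + 0.7 * np.eye(n_assets))

oos_mv, oos_1n = [], []
for _ in range(2000):
    sample = rng.multivariate_normal(mu_true, cov, n_obs)
    mu_hat = sample.mean(axis=0)                # estimation error lives here
    w = np.linalg.solve(cov, mu_hat)            # unnormalized MV weights
    w /= w.sum()
    r = rng.multivariate_normal(mu_true, cov)   # one out-of-sample draw
    oos_mv.append(w @ r)
    oos_1n.append(r.mean())                     # 1/n portfolio return

print("MV : mean %.4f  std %.4f" % (np.mean(oos_mv), np.std(oos_mv)))
print("1/n: mean %.4f  std %.4f" % (np.mean(oos_1n), np.std(oos_1n)))
```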
