Similar literature (20 results)
1.
Considering the growing need for managing financial risk, Value-at-Risk (VaR) prediction and portfolio optimisation with a focus on VaR have taken on an important role in banking and finance. Motivated by recent results showing that the choice of VaR estimator does not crucially influence decision-making in certain practical applications (e.g. in investment rankings), this study analyses the important question of how asset allocation decisions are affected when alternative VaR estimation methodologies are used. Focusing on the most popular, successful and conceptually different conditional VaR estimation techniques (i.e. historical simulation, the peaks-over-threshold method and quantile regression) and the flexible portfolio model of Campbell et al. [J. Banking Finance, 2001, 25(9), 1789–1804], we show in an empirical example and in a simulation study that these methods tend to deliver similar asset weights. In other words, optimal portfolio allocations appear not to be very sensitive to the choice of VaR estimator. This finding, which is robust in a variety of distributional environments and pre-whitening settings, supports the notion that, depending on the specific application, the simple standard methods (i.e. historical simulation) used by many commercial banks do not necessarily have to be replaced by more complex approaches (based on, e.g. extreme value theory).
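The historical-simulation baseline discussed above can be sketched in a few lines. This is a generic illustration (not the authors' implementation): VaR is read off as an empirical quantile of the loss sample.

```python
import random

def historical_var(returns, alpha=0.99):
    """Historical-simulation VaR: the empirical alpha-quantile of losses.

    Losses are negated returns; VaR is reported as a positive number
    for a long position.
    """
    losses = sorted(-r for r in returns)
    # index of the alpha-quantile in the sorted loss sample
    k = min(len(losses) - 1, int(alpha * len(losses)))
    return losses[k]

# illustrative data: 1000 simulated daily returns with 1% volatility
random.seed(0)
sample = [random.gauss(0.0, 0.01) for _ in range(1000)]
var99 = historical_var(sample, 0.99)
```

For normally distributed returns with 1% volatility, the true 99% VaR is about 2.33%, and the empirical estimate lands nearby.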

2.
ABSTRACT

In the context of predicting future claims, a fully Bayesian analysis – one that specifies a statistical model, a prior distribution, and updates using Bayes's formula – is often viewed as the gold standard, while Bühlmann's credibility estimator serves as a simple approximation. But those desirable properties that give the Bayesian solution its elevated status depend critically on the posited model being correctly specified. Here we investigate the asymptotic behavior of Bayesian posterior distributions under a misspecified model, and our conclusion is that misspecification bias generally has damaging effects that can lead to inaccurate inference and prediction. The credibility estimator, on the other hand, is not sensitive at all to model misspecification, giving it an advantage over the Bayesian solution in those practically relevant cases where the model is uncertain. This raises the question: does robustness to model misspecification require that we abandon uncertainty quantification based on a posterior distribution? Our answer is No, and we offer an alternative Gibbs posterior construction. Furthermore, we argue that this Gibbs perspective provides a new characterization of Bühlmann's credibility estimator.
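Bühlmann's credibility estimator referred to above has a standard empirical form. The sketch below is the generic textbook version (not the paper's Gibbs construction), assuming equal sample sizes per risk and using the usual moment estimators for the structural parameters.

```python
from statistics import mean, variance

def buhlmann_premiums(claims_by_risk):
    """Empirical Buhlmann credibility premiums (equal sample sizes).

    Z = n / (n + k) with k = EPV / VHM, where EPV is the average
    within-risk sample variance and VHM the between-risk variance of
    the sample means, corrected for sampling noise.
    """
    risks = list(claims_by_risk)
    n = len(claims_by_risk[risks[0]])
    xbar = {r: mean(claims_by_risk[r]) for r in risks}
    epv = mean(variance(claims_by_risk[r]) for r in risks)   # within-risk
    vhm = max(variance(xbar.values()) - epv / n, 1e-12)      # between-risk
    z = n / (n + epv / vhm)
    grand = mean(xbar.values())
    # credibility-weighted blend of own experience and the collective mean
    return {r: z * xbar[r] + (1 - z) * grand for r in risks}
```

With well-separated risks the credibility factor Z is close to 1, so each premium stays near that risk's own mean.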

3.
It is well known that the exponential dispersion family (EDF) of univariate distributions is closed under Bayesian revision in the presence of natural conjugate priors. However, this is not the case for the general multivariate EDF. This paper derives a second-order approximation to the posterior likelihood of a naturally conjugated generalised linear model (GLM), i.e., multivariate EDF subject to a link function (Section 5.5). It is not the same as a normal approximation. It does, however, lead to second-order Bayes estimators of parameters of the posterior. The family of second-order approximations is found to be closed under Bayesian revision. This generates a recursion for repeated Bayesian revision of the GLM with the acquisition of additional data. The recursion simplifies greatly for a canonical link. The resulting structure is easily extended to a filter for estimation of the parameters of a dynamic generalised linear model (DGLM) (Section 6.2). The Kalman filter emerges as a special case. A second type of link function, related to the canonical link, and with similar properties, is identified. This is called here the companion canonical link. For a given GLM with canonical link, the companion to that link generates a companion GLM (Section 4). The recursive form of the Bayesian revision of this GLM is also obtained (Section 5.5.3). There is a perfect parallel between the development of the GLM recursion and its companion. A dictionary for translation between the two is given so that one is readily derived from the other (Table 5.1). The companion canonical link also generates a companion DGLM. A filter for this is obtained (Section 6.3). Section 1.2 provides an indication of how the theory developed here might be applied to loss reserving. A sequel paper, providing numerical illustrations of this, is planned.

4.
Using different loss functions in estimation and in forecast evaluation of econometric models can cause suboptimal parameter estimates and inaccurate assessment of predictive ability. Though there are no general guidelines on how to choose the loss function, the modelling of Value-at-Risk is a rare instance in which the loss function for forecast evaluation is well defined. Within the context of the RiskMetrics methodology, the most popular approach to calculating Value-at-Risk, we investigate the implications of considering different loss functions in estimation and forecast evaluation. Based on U.S. equity, exchange rate, and bond market data, we find that there can be substantial differences in the estimates under alternative loss functions. When calculating the 99% VaR for a 10-day horizon, the RiskMetrics model for equity markets substantially overestimates the decay factor. However, out-of-sample performance is not systematically superior when the estimates under the correct loss function are used.
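The RiskMetrics variance forecast discussed above is an exponentially weighted moving average of squared returns. The sketch below is illustrative rather than the paper's estimation procedure: it assumes the standard decay factor λ = 0.94, normal quantiles and the square-root-of-time rule for the 10-day horizon.

```python
import math
import random

def ewma_var(returns, lam=0.94, alpha_z=2.326, horizon=10):
    """RiskMetrics-style VaR: EWMA variance recursion, then scaling by
    the ~N(0,1) 99% quantile (2.326) and sqrt(horizon).
    """
    var_t = returns[0] ** 2
    for r in returns[1:]:
        var_t = lam * var_t + (1 - lam) * r ** 2   # EWMA update
    return alpha_z * math.sqrt(var_t * horizon)

# illustrative data: 500 simulated daily returns with 1% volatility
random.seed(1)
rets = [random.gauss(0.0, 0.01) for _ in range(500)]
v = ewma_var(rets)
```

With 1% daily volatility, the 10-day 99% VaR should be near 2.326 × 0.01 × √10 ≈ 7.4%.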

5.
Many empirical studies report that value-at-risk (VaR) measures understate the actual 1% quantile, while Inui, Kijima and Kitano [Stat. Probab. Lett., 2005, 72, 299–311] proved that VaR measures overstate significantly when historical simulation VaR is applied to fat-tailed distributions. This paper resolves the puzzle by developing a regime switching model to estimate portfolio VaR. It is shown that our model is able to correct the underestimation of risk.

6.
This paper proposes a new methodology to compute Value at Risk (VaR) for quantifying losses in credit portfolios. We approximate the cumulative distribution of the loss function by a finite combination of Haar wavelet basis functions and calculate the coefficients of the approximation by inverting its Laplace transform. The Wavelet Approximation (WA) method is particularly suitable for non-smooth distributions, often arising in small or concentrated portfolios, when the hypotheses of the Basel II formulas are violated. To test the methodology we consider the Vasicek one-factor portfolio credit loss model as our model framework. WA is an accurate, robust and fast method, allowing the VaR to be estimated much more quickly than with a Monte Carlo (MC) method at the same level of accuracy and reliability.
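As a point of reference for the Monte Carlo benchmark mentioned above, a minimal simulation of the Vasicek one-factor loss model might look as follows (illustrative parameter values; the paper's wavelet method itself is not reproduced here). Each obligor defaults when √ρ·Y + √(1−ρ)·ε falls below Φ⁻¹(pd), with Y a common systematic factor.

```python
import math
import random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def vasicek_mc_var(pd=0.01, rho=0.2, n_obligors=100, alpha=0.999, n_sims=5000):
    """Monte Carlo alpha-VaR of the portfolio loss fraction in the
    Vasicek one-factor model (homogeneous portfolio, unit exposures).
    """
    # threshold = Phi^{-1}(pd), found by bisection on the normal CDF
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if norm_cdf(mid) < pd:
            lo = mid
        else:
            hi = mid
    thresh = (lo + hi) / 2
    losses = []
    for _ in range(n_sims):
        y = random.gauss(0, 1)                # systematic factor
        defaults = sum(
            math.sqrt(rho) * y + math.sqrt(1 - rho) * random.gauss(0, 1) < thresh
            for _ in range(n_obligors)
        )
        losses.append(defaults / n_obligors)
    losses.sort()
    return losses[min(n_sims - 1, int(alpha * n_sims))]

random.seed(2)
var999 = vasicek_mc_var()
```

For pd = 1% and ρ = 0.2, the 99.9% loss quantile is far above the mean default rate, illustrating the tail driven by the common factor.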

7.
This paper studies the parameter estimation problem for Ornstein–Uhlenbeck stochastic volatility models driven by Lévy processes. Estimation is regarded as the principal challenge in applying these models since they were proposed by Barndorff-Nielsen and Shephard [J. R. Stat. Soc. Ser. B, 2001, 63(2), 167–241]. Most previous work has used a Bayesian paradigm, whereas we treat the problem in the framework of maximum likelihood estimation, applying gradient-based simulation optimization. A hidden Markov model is introduced to formulate the likelihood of observations; sequential Monte Carlo is applied to sample the hidden states from the posterior distribution; smooth perturbation analysis is used to deal with the discontinuities introduced by jumps in estimating the gradient. Numerical experiments indicate that the proposed gradient-based simulated maximum likelihood estimation approach provides an efficient alternative to current estimation methods.

8.
Option hedging is a critical risk management problem in finance. In the Black–Scholes model, it has been recognized that computing a hedging position from the sensitivity of the calibrated model option value function is inadequate for minimizing the variance of the option hedge risk, as it fails to capture the dependence of the model parameters on the underlying price (see e.g. Coleman et al., J. Risk, 2001, 5(6), 63–89; Hull and White, J. Bank. Finance, 2017, 82, 180–190). In this paper, we demonstrate that this issue can arise generally when the hedging position is determined from the sensitivity of the option function, whether calibrated from a parametric model on current option prices or estimated nonparametrically from historical option prices. Consequently, the sensitivity of the estimated model option function typically does not minimize the variance of the hedge risk, even instantaneously. We propose a data-driven approach that directly learns a hedging function from market data by minimizing the variance of the local hedge risk. Using S&P 500 index daily option data for more than a decade ending in August 2015, we show that the proposed method outperforms the parametric minimum-variance hedging method proposed in Hull and White [J. Bank. Finance, 2017, 82, 180–190], as well as minimum-variance hedging corrective techniques based on stochastic volatility or local volatility models. Furthermore, we show that the proposed approach achieves significant gains over implied BS delta hedging for weekly and monthly hedging.

9.
10.
We analyse the dynamic dependence structure between broad stock market indexes from the United States (S&P500), Britain (FTSE100), Brazil (BOVESPA) and Mexico (PCMX). We employ Patton’s [Int. Econ. Rev., 2006, 2, 527–556] conditional copula setting and additionally observe the impact of different copula functions on Value at Risk (VaR) estimation. We conclude that the dependence between BOVESPA and the other indexes has intensified since the beginning of 2007. In our case the particular copula form is not crucial for VaR estimation. A goodness-of-fit test based on the parametric bootstrap is also applied. The best fits are obtained via time constant Student-t and time-varying Normal copulas.

11.
The calculation of VaR involves dealing with the confidence level, the time horizon, and the true underlying conditional distribution function of asset returns. In this paper, we examine the effects on the accuracy of VaR estimates of using a specific distribution function that fits the lower-tail data of the observed distribution of asset returns well. In our analysis, we consider some distributional forms that capture the excess kurtosis characteristic of stock return distributions, and we compare their performance using several international stock indices. JEL Classification C15 · G10
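One common way to let excess kurtosis (and skewness) into a VaR quantile, not necessarily one of the distributional forms used in the paper, is the Cornish-Fisher expansion of the normal quantile:

```python
def cornish_fisher_var(mu, sigma, skew, exkurt, z=2.326):
    """99% VaR with the normal quantile z adjusted for skewness and
    excess kurtosis via the Cornish-Fisher expansion.  Returns the loss
    quantile for a long position (positive number).
    """
    zcf = (z
           + (z ** 2 - 1) * skew / 6
           + (z ** 3 - 3 * z) * exkurt / 24
           - (2 * z ** 3 - 5 * z) * skew ** 2 / 36)
    return -(mu - zcf * sigma)

# fat-tailed, left-skewed returns vs the plain normal case
v_fat = cornish_fisher_var(0.0, 0.01, -0.5, 3.0)
v_norm = cornish_fisher_var(0.0, 0.01, 0.0, 0.0)
```

With negative skewness and positive excess kurtosis the adjusted quantile exceeds the normal one, so the fat-tailed VaR is larger, which is the qualitative effect the abstract is concerned with.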

12.
The aim of this paper is to apply a nonparametric methodology developed by Donoho et al. (2003, IEEE Trans. Signal Processing, 53, 614–27) for estimating an autocovariance sequence to the statistical analysis of security returns, and to discuss the advantages offered by this approach over other existing methods such as fixed-window-length segmentation procedures. Theoretical adaptivity properties of this estimation method have been proved for a specific class of time series, namely the class of locally stationary processes, whose autocovariance structure varies slowly over time in most cases but may exhibit abrupt changes of regime. This method is based on an algorithm that empirically selects from the data the tiling of the time–frequency plane which best exposes, in the least-squares sense, the underlying second-order time-varying structure of the time series, and so may properly describe the time-inhomogeneous variations of speculative prices. The applications we consider here mainly concern the analysis of structural changes occurring in stock market returns, VaR estimation, and the comparison between the variation structure of stock index returns in developed and developing markets.

13.
Abstract

This paper develops a Pareto scale-inflated outlier model. This model is intended for use when data from some standard Pareto distribution of interest is suspected to have been contaminated with a relatively small number of outliers from a Pareto distribution with the same shape parameter but with an inflated scale parameter. The Bayesian analysis of this Pareto scale-inflated outlier model is considered and its implementation using the Gibbs sampler is discussed. The paper contains three worked illustrative examples, two of which feature actual insurance claims data.

14.
This study develops an optimal insurance contract endogenously under a value-at-risk (VaR) constraint. Although Wang et al. [2005] examined this problem, their assumption implied that the insured is risk neutral. This study therefore extends Wang et al. [2005] and considers a more realistic situation in which the insured is risk averse. The study derives the optimal insurance contract as a single-deductible insurance when the VaR constraint is redundant, or as a double-deductible insurance when the VaR constraint is binding. Finally, this study discusses the optimal coverage level for common forms of insurance, including deductible insurance, upper-limit insurance, and proportional coinsurance. JEL Classification G22
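To make the deductible mechanics concrete, the sketch below computes the insured's retained loss and its empirical VaR under a single deductible. This is only the retention side of the problem; the double-deductible optimum under a binding VaR constraint follows from the paper's derivation and is not reproduced here.

```python
def retained_var(losses, d, alpha=0.95):
    """Empirical alpha-VaR of the insured's retained loss under a
    deductible-d contract: the insurer pays (x - d)+, so the insured
    retains min(x, d).
    """
    retained = sorted(min(x, d) for x in losses)
    k = min(len(retained) - 1, int(alpha * len(retained)))
    return retained[k]
```

A deductible below the loss quantile caps the retained VaR at d, which is exactly the quantity a VaR constraint on the insured would restrict.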

15.
16.
Despite well-known shortcomings as a risk measure, Value-at-Risk (VaR) is still the industry and regulatory standard for the calculation of risk capital in banking and insurance. This paper is concerned with the numerical estimation of the VaR of a portfolio position as a function of different dependence scenarios on the factors of the portfolio. Besides summarizing the most relevant analytical bounds, including a discussion of their sharpness, we introduce a numerical algorithm which allows for the computation of reliable (sharp) bounds for the VaR of high-dimensional portfolios with dimension d possibly in the several hundreds. We show that additional positive dependence information will typically not improve the upper bound substantially. In contrast, higher-order marginal information on the model, when available, may lead to strongly improved bounds. Several examples of practical relevance show how explicit VaR bounds can be obtained. These bounds can be interpreted as a measure of model uncertainty induced by possible dependence scenarios.
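A minimal sketch of a rearrangement-type algorithm for the worst-case VaR of a sum, the kind of numerical scheme for dependence-uncertainty bounds described above (this is an assumption about the algorithm's family, not a reproduction of the paper's implementation). It works on discretised upper-tail quantiles of each marginal, rearranging each column to be oppositely ordered to the sum of the others:

```python
def rearrangement_bound(quantile_cols, n_iter=50):
    """Approximate sharp upper VaR bound for a sum of risks.

    quantile_cols: one list per risk, each holding N discretised
    quantiles of that marginal's upper tail.  After rearrangement,
    the minimal row sum approximates the worst-case VaR.
    """
    cols = [sorted(c) for c in quantile_cols]
    n, d = len(cols[0]), len(cols)
    for _ in range(n_iter):
        for j in range(d):
            # sum of all other columns, row by row
            other = [sum(cols[k][i] for k in range(d) if k != j)
                     for i in range(n)]
            # oppositely order column j against that sum
            order = sorted(range(n), key=lambda i: other[i], reverse=True)
            vals = sorted(cols[j])
            new_col = [0.0] * n
            for rank, i in enumerate(order):
                new_col[i] = vals[rank]
            cols[j] = new_col
    return min(sum(cols[k][i] for k in range(d)) for i in range(n))
```

For two identical discretised marginals, the antimonotonic rearrangement makes every row sum equal, so the bound is the constant row sum.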

17.
This paper studies seven GARCH models, including RiskMetrics and two long-memory GARCH models, in Value at Risk (VaR) estimation. Both long and short investment positions are considered. The seven models were applied to 12 market indices and four foreign exchange rates to assess each model's VaR estimates at various confidence levels. The results indicate that both stationary and fractionally integrated GARCH models outperform RiskMetrics in estimating 1% VaR. Although most return series show fat-tailed distributions and satisfy the long-memory property, it is more important to consider a model with fat-tailed errors in estimating VaR. Asymmetric behavior is also discovered in the stock market data, in that t-error models give better 1% VaR estimates than normal-error models for long positions, but not for short positions. No such asymmetry is observed in the exchange rate data.
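A one-step-ahead GARCH(1,1) VaR forecast of the kind backtested above can be sketched as follows, with parameters taken as given (hypothetical values; estimation and the fat-tailed error distributions compared in the paper are not shown):

```python
import math

def garch_var(returns, omega, a, b, z=2.326):
    """One-step-ahead 1% VaR from a GARCH(1,1) variance filter with
    fixed, pre-estimated parameters:

        h_t = omega + a * r_{t-1}^2 + b * h_{t-1}

    z is the error-distribution quantile (2.326 for normal errors; a
    t-quantile would be substituted for fat-tailed errors).
    """
    h = omega / max(1.0 - a - b, 1e-6)   # start at unconditional variance
    for r in returns:
        h = omega + a * r ** 2 + b * h
    return z * math.sqrt(h)
```

With zero returns the recursion converges to ω/(1−b), which gives a closed-form check.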

18.
This paper proposes a two-step methodology for Value-at-Risk prediction. The first step involves estimation of a GARCH model using quasi-maximum likelihood estimation, and the second step applies the skewed t distribution of Azzalini and Capitanio [J. R. Stat. Soc. B, 2003, 65, 367–389] to the model-filtered returns. The predictive performance of this method is compared to the single-step joint estimation of the same data generating process, to the well-known GARCH-EVT model, and to a comprehensive set of other market risk models. Backtesting results show that the proposed two-step method outperforms most benchmarks, including the classical joint estimation method for the same data generating process, and performs competitively with respect to the GARCH-EVT model. This paper recommends two robust models to risk managers of emerging market stock portfolios, both estimated in two steps: the GJR-GARCH-EVT model and the two-step GARCH-St model proposed in this study.

19.
A new approach that uses Lévy processes to compute value-at-risk (VaR) from high-frequency data is presented in this paper. The approach is a parametric model based on an ARMA(1,1)-GARCH(1,1) specification in which tail events are modelled using fractional Lévy stable noise and the Lévy stable distribution. Using high-frequency data for the German DAX index, the VaR estimates from this approach are compared with those of a standard nonparametric estimation method based on the empirical distribution function, and with models in which tail events are modelled using the Gaussian distribution and fractional Gaussian noise. The results suggest that the proposed parametric approach yields superior predictive performance.

20.
The Value at Risk (VaR) is a risk measure that is widely used by financial institutions in allocating risk. VaR forecast estimation involves the conditional evaluation of quantiles based on the currently available information. Recent advances in VaR evaluation incorporate conditional variance into the quantile estimation, yielding the Conditional Autoregressive VaR (CAViaR) models. However, the large number of alternative CAViaR models raises the issue of identifying the optimal quantile predictor. To resolve this uncertainty, we propose a Bayesian encompassing test that evaluates the predictions of various CAViaR models against a combined CAViaR model based on the encompassing principle. This test provides a basis for forecasting combined conditional VaR estimates when there is evidence against the encompassing principle. We illustrate the test using simulated and financial daily return data series. The results demonstrate that there is evidence for using combined conditional VaR estimates when forecasting quantile risk.
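One standard CAViaR specification, the symmetric absolute value model, makes the conditional-quantile recursion concrete (hypothetical parameter values; in practice the betas are estimated by quantile regression, and the paper's Bayesian encompassing test is not reproduced here):

```python
def caviar_sav(returns, beta0, beta1, beta2, var0):
    """Symmetric absolute value CAViaR recursion:

        VaR_t = beta0 + beta1 * VaR_{t-1} + beta2 * |r_{t-1}|

    Returns one VaR forecast per observation, starting from var0.
    """
    path = [var0]
    for r in returns[:-1]:
        path.append(beta0 + beta1 * path[-1] + beta2 * abs(r))
    return path
```

The autoregressive term smooths the quantile path, while the |r| term lets recent large moves of either sign raise the forecast.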
