Similar Documents
19 similar documents found (search time: 109 ms)
1.
To address the shortcomings of optimal bandwidth selection for nonparametric kernel density estimation in practical modeling, this paper proposes a new iterative method for choosing the optimal bandwidth, overcoming the limitations of the traditional rule of thumb. On this basis, a new nonparametric kernel-density ML method is applied to the Chinese stock market, and a comparison with maximum likelihood estimation demonstrates the method's effectiveness and feasibility. The empirical analysis shows that, judged against actual values, the nonparametric fit of the Shanghai Composite Index's daily returns is better than the maximum-likelihood fit.
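The iterative-bandwidth idea can be sketched as a fixed-point scheme on synthetic data; the sample, seed and update rule below are assumptions for illustration, not the authors' exact algorithm:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical daily returns standing in for the Shanghai Composite sample.
rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.015, size=1000)
n = returns.size

# Traditional rule of thumb (Silverman) as the starting bandwidth.
h = 1.06 * returns.std(ddof=1) * n ** (-1 / 5)

# Illustrative fixed-point iteration: inflate the scale estimate by the current
# kernel width and recompute the bandwidth until it stabilizes.
for _ in range(50):
    sigma_smooth = np.sqrt(returns.var(ddof=1) + h ** 2)
    h_new = 1.06 * sigma_smooth * n ** (-1 / 5)
    if abs(h_new - h) < 1e-10:
        break
    h = h_new

# Final density estimate with the iterated bandwidth.
kde = gaussian_kde(returns, bw_method=h / returns.std(ddof=1))
density_at_mean = kde(returns.mean())[0]
```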

2.
《价值工程》2016,(19):179-181
To address the discrepancy between the confidence level of value at risk (VaR) under the true distribution and under a fitted distribution, a correction factor is constructed to adjust the VaR confidence level under the fitted distribution. The Kolmogorov-Smirnov test is used to measure the goodness of fit of the fitted curve to historical financial return data, and the correction factor is built from the estimation bias of the fitted distribution relative to the true data distribution. Numerical simulations with standard-normal random numbers and an empirical analysis of Shanghai Composite Index log returns verify the reasonableness of the proposed correction factor.
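A minimal sketch of the KS-based idea on synthetic heavy-tailed data; the deliberately simple correction below is an assumption for illustration, not the authors' exact correction factor:

```python
import numpy as np
from scipy import stats

# Synthetic heavy-tailed returns that a normal distribution only approximates.
returns = stats.t.rvs(df=4, scale=0.01, size=2000, random_state=1)

# Fit a normal distribution and measure fit quality with Kolmogorov-Smirnov.
mu, sigma = stats.norm.fit(returns)
ks_stat, p_value = stats.kstest(returns, 'norm', args=(mu, sigma))

# Nominal 99% VaR under the fitted normal.
var_nominal = -stats.norm.ppf(0.01, loc=mu, scale=sigma)

# Crude correction in the spirit of the abstract: shrink the tail probability
# by the KS distance so the effective confidence level absorbs fitting error.
alpha_corrected = max(0.01 - ks_stat, 1e-4)
var_corrected = -stats.norm.ppf(alpha_corrected, loc=mu, scale=sigma)
```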

3.
Measuring operational risk is the foundation of operational risk management; only with accurate measurement can risk be managed effectively. Using Bayesian estimation based on the conjugate-distribution principle, and taking into account the business characteristics of Chinese commercial banks, the paper proposes a two-stage fitting method for the loss-severity distribution: the severity follows a lognormal distribution below a threshold and a Pareto distribution above it, so as to better fit the central and tail behavior of operational-loss data. Finally, an empirical study on loss data from a commercial bank yields the economic capital required for each business line, broadly consistent with the bank's assets.

4.
Measurement and Management of Operational Risk in Commercial Banks Based on Bayesian Estimation (cited by 2: 0 self-citations, 2 by others)
Measuring operational risk is the foundation of operational risk management; only with accurate measurement can risk be managed effectively. Using Bayesian estimation based on the conjugate-distribution principle, and taking into account the business characteristics of Chinese commercial banks, the paper proposes a two-stage fitting method for the loss-severity distribution: the severity follows a lognormal distribution below a threshold and a Pareto distribution above it, so as to better fit the central and tail behavior of operational-loss data. Finally, an empirical study on loss data from a commercial bank yields the economic capital required for each business line, broadly consistent with the bank's assets.
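The two-stage severity fit can be sketched with SciPy on simulated losses; the data, threshold choice and quantile level are assumptions for illustration:

```python
import numpy as np
from scipy import stats

# Simulated operational-loss severities (hypothetical data).
rng = np.random.default_rng(2)
losses = rng.lognormal(mean=10.0, sigma=1.2, size=5000)

# Stage 1: lognormal body below a high threshold (here the 90th percentile).
threshold = np.quantile(losses, 0.90)
body = losses[losses <= threshold]
shape_ln, _, scale_ln = stats.lognorm.fit(body, floc=0)

# Stage 2: Pareto-type (generalized Pareto) tail for exceedances above it.
exceedances = losses[losses > threshold] - threshold
xi, _, beta = stats.genpareto.fit(exceedances, floc=0)

# A 99.9% severity quantile pieced together through the tail model:
# P(X <= q) = 0.90 + 0.10 * F_GPD(q - threshold).
q999 = threshold + stats.genpareto.ppf((0.999 - 0.90) / 0.10, xi, loc=0, scale=beta)
```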

5.
Based on a field survey, the returned questionnaires are analyzed using contingency tables and nonparametric kernel density estimation. The contingency-table analysis reveals how each travel direction relates to travel purpose, while kernel density estimation yields the probability distributions of Wuchang Station passengers' incomes and of the fare increase they can tolerate. Most passengers at Wuchang Station turn out to be low- to middle-income and cannot tolerate a large fare increase.

6.
A Method for Determining the Distribution of Financial Asset Returns (cited by 6: 0 self-citations, 6 by others)
Accurately characterizing the distribution function of financial asset returns is the foundation of financial market risk measurement and management. Building on normality tests of securities returns, this paper proposes a method for determining the distribution of asset returns, mixture identification, performs simulations for mixtures of two normal distributions, and finally checks the fitted return curves with the Kolmogorov goodness-of-fit test.
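The mixture-identification step can be sketched as a small EM fit of a two-component normal mixture, followed by a Kolmogorov goodness-of-fit check; the data and generic EM below are assumptions, not the paper's exact procedure:

```python
import numpy as np
from scipy import stats

# Hypothetical returns drawn from a two-component normal mixture (calm/volatile days).
rng = np.random.default_rng(3)
returns = np.concatenate([rng.normal(0.001, 0.008, 1500),
                          rng.normal(-0.002, 0.025, 500)])

# EM for a two-component normal mixture.
w, mu, sd = np.array([0.5, 0.5]), np.array([0.0, 0.0]), np.array([0.005, 0.03])
for _ in range(200):
    # E-step: posterior responsibility of each component for each observation.
    dens = w * stats.norm.pdf(returns[:, None], mu, sd)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update weights, means and standard deviations.
    nk = resp.sum(axis=0)
    w = nk / len(returns)
    mu = (resp * returns[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (returns[:, None] - mu) ** 2).sum(axis=0) / nk)

# Kolmogorov goodness-of-fit of the fitted mixture CDF against the sample.
def mixture_cdf(x):
    return (w * stats.norm.cdf(np.asarray(x)[:, None], mu, sd)).sum(axis=1)

ks_stat, p_value = stats.kstest(returns, mixture_cdf)
```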

7.
Measuring VaR and ES Based on Extreme Value Theory (cited by 4: 0 self-citations, 4 by others)
This paper applies extreme value theory to estimate the tails of financial return series and computes value at risk (VaR) and expected shortfall (ES) to measure market risk. A GARCH model estimated by pseudo-maximum likelihood is fitted to the return data, the generalized Pareto distribution (GPD) models the tail of the innovation distribution, and tail-based VaR and ES values for the return series are obtained. Using daily log returns of the Shanghai Composite Index, both conditional and unconditional extreme-value VaR and ES are computed. The empirical study shows that at very high confidence levels (e.g., 99%) the extreme-value approach measures risk better, while at the 95% level combining other methods with the extreme-value approach works well. Measuring risk with ES reveals how severe losses can be when adverse events occur.
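The unconditional peaks-over-threshold computation can be sketched with the standard GPD tail formulas; the simulated returns, threshold and confidence level are assumptions:

```python
import numpy as np
from scipy import stats

# Simulated heavy-tailed daily returns (stand-in for index log-returns).
returns = stats.t.rvs(df=3, scale=0.01, size=5000, random_state=4)
losses = -returns  # treat the left tail of returns as the loss tail

# Peaks over threshold: fit a GPD to exceedances over a high threshold.
u = np.quantile(losses, 0.95)
exceedances = losses[losses > u] - u
xi, _, beta = stats.genpareto.fit(exceedances, floc=0)

# Unconditional 99% VaR and ES from the standard GPD tail formulas.
n, n_u = losses.size, exceedances.size
p = 0.99
var_p = u + (beta / xi) * ((n / n_u * (1 - p)) ** (-xi) - 1.0)
es_p = (var_p + beta - xi * u) / (1.0 - xi)  # valid for xi < 1
```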

8.
《价值工程》2015,(25):214-215
Nonparametric methods are a branch of probability and statistics. Kernel density estimation suffers from boundary effects when estimating near the boundary of the support. We prove the uniform strong consistency of the proposed nonparametric conditional kernel density estimator hn*(m,n).
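The boundary effect, and a standard reflection correction for it, can be illustrated on data supported on [0, ∞); the synthetic exponential sample and the textbook reflection method are assumptions, not the estimator hn*(m,n) studied in the paper:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic data supported on [0, inf): a plain KDE leaks mass below 0,
# so it underestimates the density near the boundary x = 0.
rng = np.random.default_rng(5)
data = rng.exponential(scale=1.0, size=2000)

naive = gaussian_kde(data)

# Reflection method: mirror the sample about 0, fit a KDE to the augmented
# sample, and double the estimate on [0, inf).
reflected = gaussian_kde(np.concatenate([data, -data]))

x0 = np.array([0.0])
f0_naive = naive(x0)[0]             # biased low at the boundary
f0_corrected = 2 * reflected(x0)[0]  # reflection-corrected estimate
```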

9.
刘晓明  邸彦彬 《活力》2012,(4):81-81
To improve the accuracy of converting GPS geodetic heights to normal heights, building a multiquadric-function model for height fitting within a local area can achieve high accuracy. The paper performs height fitting with a multiquadric model, selecting evenly distributed GPS/leveling benchmarks, analyzes the choice of kernel function in detail, and compares the resulting accuracy with the quadratic-surface fitting commonly used in height fitting.
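A multiquadric height-anomaly fit can be sketched with SciPy's RBF interpolator; the benchmark coordinates, anomaly surface and GPS height below are hypothetical:

```python
import numpy as np
from scipy.interpolate import Rbf

# Hypothetical GPS/leveling benchmarks: plane coordinates (x, y) and the
# height anomaly (geodetic minus normal height) observed at each point.
x = np.array([0.0, 1.0, 2.0, 0.0, 1.0, 2.0, 0.5, 1.5])
y = np.array([0.0, 0.0, 0.0, 2.0, 2.0, 2.0, 1.0, 1.0])
anomaly = 0.5 + 0.01 * x - 0.02 * y + 0.005 * x * y  # synthetic smooth surface

# Multiquadric RBF surface through the benchmarks; smooth=0 interpolates exactly.
model = Rbf(x, y, anomaly, function='multiquadric', smooth=0.0)

# Predict the anomaly at a new GPS point and convert its geodetic height
# to a normal height.
h_geodetic = 105.250           # hypothetical GPS-derived geodetic height (m)
zeta = float(model(1.0, 1.5))  # interpolated height anomaly (m)
h_normal = h_geodetic - zeta
```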

10.
王霞  张本涛  马庆 《价值工程》2011,30(26):64-65
Using net present value (NPV) as the criterion for measuring project investment risk, the probability distributions of the influencing factors are specified and a stochastic risk-evaluation model based on the triangular distribution is built. Monte Carlo simulation, implemented in MATLAB, yields the frequency histogram and cumulative frequency curve of the project's NPV. Statistical analysis of the simulation results gives the expected NPV and the risk rate, providing a theoretical basis for evaluating project investment risk.
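The triangular-distribution Monte Carlo scheme translates directly into code; the sketch below uses NumPy instead of MATLAB, with hypothetical cash-flow parameters:

```python
import numpy as np

rng = np.random.default_rng(6)
n_sim = 100_000
rate = 0.08   # discount rate (hypothetical)
years = 5

# Fixed initial outlay; each year's net cash flow is triangular(min, mode, max).
outlay = 1000.0
low, mode, high = 200.0, 300.0, 450.0

cash_flows = rng.triangular(low, mode, high, size=(n_sim, years))
discount = (1 + rate) ** -np.arange(1, years + 1)
npv = cash_flows @ discount - outlay

mean_npv = npv.mean()            # expected NPV across simulations
risk_rate = (npv < 0).mean()     # probability the project loses money
```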

11.
This paper generalizes the Dynamic Conditional Correlation (DCC) model of Engle (2002), incorporating a flexible non-Gaussian distribution based on Gram-Charlier expansions. The resulting semi-nonparametric DCC (SNP-DCC) model allows estimation in two stages and addresses the negativity problem inherent in truncated SNP densities. We test the performance of the SNP-DCC model against the Gaussian DCC through an empirical application of density forecasting for portfolio returns. Our results show that the proposed multivariate model provides a better in-sample fit and forecast of the portfolio return distribution, and is thus useful for financial risk forecasting and evaluation.

12.
This paper studies the estimation of the pricing kernel and explains the pricing kernel puzzle found in the FTSE 100 index. We use prices of options and futures on the FTSE 100 index to derive the risk-neutral density (RND). The option-implied RND is inverted using two nonparametric methods: the implied-volatility surface interpolation method and the positive convolution approximation (PCA) method. The actual density is estimated from historical FTSE 100 data using the threshold GARCH (TGARCH) model. The results show that the RNDs derived from the two methods are negatively skewed and fat-tailed relative to the actual probability density, which is consistent with the "volatility smile." The derived risk aversion is found to be locally increasing at the center but decreasing asymmetrically at both tails; this is the so-called pricing kernel puzzle. Simulation results based on a representative-agent model with two state variables show that the pricing kernel is locally increasing with wealth around the level of 1 and is consistent with the empirical pricing kernel in shape and magnitude.

13.
In this paper, we investigate the goodness-of-fit of three Lévy processes, namely the Variance-Gamma (VG), Normal-Inverse Gaussian (NIG) and Generalized Hyperbolic (GH) distributions, and of the probability distribution of the Heston model, to index returns of twenty developed and emerging stock markets. We extend the analysis by applying a Markov regime switching model to identify normal and turbulent periods. Our findings indicate that the probability distribution of the Heston model performs well for emerging markets under full-sample estimation and retains its goodness of fit in high-volatility periods, as it explicitly accounts for the volatility process. On the other hand, the Lévy distributions, especially the VG and NIG, generally improve upon the fit of the Heston model, particularly for developed markets and low-volatility periods. Furthermore, some distributions yield significantly large test statistics for some countries even though they fit other markets well, which suggests that the properties of each stock market are crucial in identifying the distribution that best represents its empirical returns.
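The goodness-of-fit comparison can be sketched for the NIG case with SciPy; the synthetic returns, NIG parameters and sample size are assumptions for the sketch:

```python
import numpy as np
from scipy import stats

# Synthetic index returns generated from an NIG law with mild negative skew
# and heavy tails (parameters are assumptions).
returns = stats.norminvgauss.rvs(a=1.5, b=-0.3, loc=0.0005, scale=0.012,
                                 size=2000, random_state=7)

# Fit NIG and normal by maximum likelihood, then compare KS statistics,
# mirroring the distribution comparison in the abstract.
nig_params = stats.norminvgauss.fit(returns)
norm_params = stats.norm.fit(returns)

ks_nig, _ = stats.kstest(returns, 'norminvgauss', args=nig_params)
ks_norm, _ = stats.kstest(returns, 'norm', args=norm_params)
```

A smaller KS statistic for the NIG fit reflects its ability to absorb the skewness and excess kurtosis that the normal fit cannot.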

14.
Given that underlying assets in financial markets exhibit stylized facts such as leptokurtosis, asymmetry, clustering and heteroskedasticity, this paper applies stochastic volatility models driven by tempered stable Lévy processes to construct time-changed tempered stable Lévy processes (TSSV) for financial risk measurement and portfolio reversion. The TSSV framework accommodates the infinite-activity jump behavior and time-varying volatility consistently observed in financial markets by introducing time-changing volatility into tempered stable processes, specifically the normal tempered stable (NTS) and classical tempered stable (CTS) distributions, capturing the leptokurtosis, fat-tailedness and asymmetry of returns in addition to the volatility clustering of stochastic volatility. Using the analytical characteristic function and the fast Fourier transform (FFT), closed-form formulas for the probability density function (PDF) of returns, value at risk (VaR) and conditional value at risk (CVaR) are derived. Finally, to forecast extreme events and volatile markets, we conduct an empirical study on the Hang Seng index, measuring risk and constructing portfolios based on risk-adjusted reward-risk stock-selection criteria under TSSV models; the stochastic volatility normal tempered stable (NTSSV) model produces performance superior to the others.

15.
Value at risk (VaR) is a commonly used tool for measuring market risk. In this paper, we discuss the problems of model choice and VaR performance. The VaRs of daily returns of the Shanghai and Shenzhen indexes are calculated using the equally weighted moving average (EQMA), exponentially weighted moving average (EWMA), GARCH(1,1), empirical density estimation, and Pareto-type extreme-value distribution methods. Considering the length of the estimation window and the requirement for adequate capital, backtesting indicates that the Pareto-type extreme-value method reflects real market risk more accurately than the other models.
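The EWMA variant, with a simple violation-count backtest, can be sketched as follows; the synthetic returns and the RiskMetrics decay factor 0.94 are assumptions:

```python
import numpy as np

# Synthetic daily returns with slowly varying volatility.
rng = np.random.default_rng(8)
t = np.arange(1500)
returns = rng.normal(0, 0.01, size=1500) * (1 + 0.5 * np.sin(t / 50))

lam = 0.94   # RiskMetrics decay factor
z = 2.326    # approximately the standard normal 99% quantile

# Recursive EWMA variance, then a one-day-ahead 99% VaR forecast for each day.
sigma2 = np.empty_like(returns)
sigma2[0] = returns[:50].var()
for i in range(1, len(returns)):
    sigma2[i] = lam * sigma2[i - 1] + (1 - lam) * returns[i - 1] ** 2

var_forecast = z * np.sqrt(sigma2)

# Backtest: fraction of days where the realized loss exceeded the VaR forecast
# (should be near 1% if the model is adequate).
violations = (-returns > var_forecast).mean()
```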

16.
In this study, the Variance-Gamma (VG) and Normal-Inverse Gaussian (NIG) distributions are compared against the benchmark generalized hyperbolic distribution in terms of their fit to the empirical distribution of high-frequency stock market index returns in China. First, we estimate the models in a Markov regime switching framework to identify different volatility regimes. Second, the goodness-of-fit results are compared at different time scales of log-returns. Third, the results are validated through bootstrapping experiments. Our results show that as the time scale of log-returns decreases, the NIG model consistently outperforms the VG model and the difference between their goodness-of-fit statistics increases. For high-frequency Chinese index returns, the NIG model is more robust and provides a better fit to the empirical distribution at different time scales.

17.
We develop a set of statistics to represent the option-implied stochastic discount factor and we apply them to S&P 500 returns between 1990 and 2012. Our statistics, which we call state prices of conditional quantiles (SPOCQ), estimate the market's willingness to pay for insurance against outcomes in various quantiles of the return distribution. By estimating state prices at conditional quantiles, we separate variation in the shape of the pricing kernel from variation in the probability of a particular event. Thus, without imposing strong assumptions about the distribution of returns, we obtain a novel view of pricing-kernel dynamics. We document six features of SPOCQ for the S&P 500. Most notably, and in contrast to recent studies, we find that the price of downside risk decreases when volatility increases. Under a standard asset pricing model, this result implies that most changes in volatility stem from fluctuations in idiosyncratic risk. Consistent with this interpretation, no known systematic risk factors such as consumer sentiment, liquidity or macroeconomic risk can account for the negative relationship between the price of downside risk and volatility. Copyright © 2016 John Wiley & Sons, Ltd.

18.
We propose a kernel-based Bayesian framework for the analysis of stochastic frontiers and efficiency measurement. The primary feature of this framework is that the unknown distribution of inefficiency is approximated by a transformed Rosenblatt-Parzen kernel density estimator. To justify the kernel-based model, we conduct a Monte Carlo study and also apply the model to a panel of U.S. large banks. Simulation results show that the kernel-based model is capable of providing more precise estimation and prediction results than the commonly used exponential stochastic frontier model. The Bayes factor also favors the kernel-based model over the exponential model in the empirical application.

19.
In this paper we model value at risk (VaR) for daily asset returns using a collection of parametric univariate and multivariate models of the ARCH class based on the skewed Student distribution. We show that models relying on a symmetric density for the error term underperform skewed-density models when both the left and right tails of the return distribution must be modelled; thus, VaR for traders holding both long and short positions is not adequately captured by the usual normal or Student distributions. We suggest an APARCH model based on the skewed Student distribution (combined with a time-varying correlation in the multivariate case) to fully account for the fat left and right tails of the return distribution, allowing adequate modelling of large returns on both long and short trading positions. The performance of the univariate models is assessed on daily data for three international stock indexes and three US stocks of the Dow Jones index. In a second application, we consider a portfolio of three US stocks and model its long and short VaR using a multivariate skewed Student density. Copyright © 2003 John Wiley & Sons, Ltd.
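The long-versus-short asymmetry can be illustrated with any skewed density; the sketch below substitutes SciPy's skew-normal for the skewed Student density of the paper, on synthetic data at the 95% level:

```python
import numpy as np
from scipy import stats

# Synthetic left-skewed returns (skew-normal stands in for the skewed Student).
returns = stats.skewnorm.rvs(a=-3, loc=0.005, scale=0.015, size=5000,
                             random_state=9)

# Fit the skewed density by maximum likelihood.
a_hat, loc_hat, scale_hat = stats.skewnorm.fit(returns)

# Long position: risk sits in the left tail; short position: in the right tail.
var_long = -stats.skewnorm.ppf(0.05, a_hat, loc=loc_hat, scale=scale_hat)
var_short = stats.skewnorm.ppf(0.95, a_hat, loc=loc_hat, scale=scale_hat)

# With negative skewness the left tail is fatter, so long VaR exceeds short VaR;
# a symmetric density would force the two to coincide (up to the mean shift).
asymmetry = var_long - var_short
```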
