Similar Literature
20 similar documents found (search time: 21 ms)
1.
Optimisation problems in finance commonly have non-linear constraints for which previous solutions have required unrealistic assumptions. However, many of these can be efficiently solved as semidefinite programming (SDP) problems, which have less restrictive assumptions. Through a review of the literature that uses SDP in finance, two major research streams are identified: portfolio optimisation and option pricing. Nevertheless, many finance researchers are unaware of SDP, possibly because this research is often published in non-finance journals. This paper aims to better integrate the SDP research to promote wider use of current findings and further interdisciplinary research, particularly in environmental finance.
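As a concrete taste of what an SDP in finance looks like, here is a minimal cvxpy sketch of one canonical problem, finding the nearest valid correlation matrix to an improper one; the input matrix is invented for illustration, not taken from the paper.

```python
import numpy as np
import cvxpy as cp

# Toy "improper" correlation matrix (illustrative; not from the paper).
C = np.array([[1.0, 0.9, 0.7],
              [0.9, 1.0, -0.6],
              [0.7, -0.6, 1.0]])

n = C.shape[0]
X = cp.Variable((n, n), symmetric=True)

# SDP: minimise the Frobenius distance to C subject to a positive
# semidefinite constraint and a unit diagonal.
problem = cp.Problem(cp.Minimize(cp.norm(X - C, "fro")),
                     [X >> 0, cp.diag(X) == 1])
problem.solve()
print(np.round(X.value, 3))
```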

2.
Financial time series have two features which, in many cases, prevent the use of conventional estimators of volatilities and correlations: leptokurtic distributions and contamination of data with outliers. Other techniques are required to achieve stable and accurate results. In this paper, we review robust estimators for volatilities and correlations and identify those best suited for use in risk management. The selection criteria were that the estimator should be stable both to fractionally small departures for all data points (fat tails) and to fractionally large departures for a small number of data points (outliers). Since risk management typically deals with thousands of time series at once, another major requirement was that the approach be independent of any manual correction or data pre-processing. We recommend using volatility t-estimators, for which we derive the estimation-error formula for the case when the exact shape of the data distribution is unknown. A convenient robust estimator for correlations is Kendall's tau, whose drawback is that it does not guarantee the positivity of the correlation matrix. We use geometric optimization to overcome this problem by finding the closest correlation matrix to a given matrix in terms of the Hadamard norm. We propose the weights for the norm and demonstrate the efficiency of the algorithm on large-scale problems.
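A minimal sketch of the correlation step, assuming toy fat-tailed data: pairwise Kendall's tau (via scipy) mapped to the linear-correlation scale with the standard sin(pi*tau/2) transform for elliptical distributions. As the abstract notes, positive semidefiniteness is not guaranteed, which is what the authors' Hadamard-norm projection then repairs (that step is not reproduced here).

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
X = rng.standard_t(df=3, size=(500, 4))   # fat-tailed toy returns

n = X.shape[1]
C = np.eye(n)
for i in range(n):
    for j in range(i + 1, n):
        tau, _ = kendalltau(X[:, i], X[:, j])
        # standard tau -> linear correlation map for elliptical data
        C[i, j] = C[j, i] = np.sin(np.pi * tau / 2)

# Positive semidefiniteness is not guaranteed; inspect the spectrum:
print(np.linalg.eigvalsh(C).min())
```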

3.
We discuss a weighted estimation of correlation and covariance matrices from historical financial data. To this end, we introduce a weighting scheme that accounts for the similarity of previous market conditions to the present situation. The resulting estimators are less biased and show lower variance than either unweighted or exponentially weighted estimators. The weighting scheme is based on a similarity measure that compares the current correlation structure of the market to the structures at past times. Similarity is then measured by the matrix 2-norm of the difference of probe correlation matrices estimated for two different points in time. The method is validated in a simulation study and tested empirically in the context of mean–variance portfolio optimization. In the latter case we find an enhanced realized portfolio return as well as a reduced portfolio risk compared with alternative approaches based on different strategies and estimators.
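A sketch of the scheme's core loop under stated assumptions: rolling-window probe correlation matrices and an exponential map from 2-norm dissimilarity to weight (the kernel choice here is illustrative, not necessarily the paper's).

```python
import numpy as np

def probe_corr(R, end, window=60):
    """Correlation matrix from the `window` observations ending at row `end`."""
    return np.corrcoef(R[end - window:end], rowvar=False)

def similarity_weighted_cov(R, window=60, scale=1.0):
    T = R.shape[0]
    C_now = probe_corr(R, T, window)
    times = np.arange(window, T + 1)
    # Weight each past date by how similar its correlation structure is
    # to today's, measured by the matrix 2-norm of the difference.
    dist = np.array([np.linalg.norm(probe_corr(R, t, window) - C_now, 2)
                     for t in times])
    w = np.exp(-scale * dist)
    w /= w.sum()
    obs = R[times - 1]                    # the return observation at each date
    dev = obs - w @ obs
    return (w[:, None] * dev).T @ dev     # weighted covariance estimate

rng = np.random.default_rng(1)
Sigma = similarity_weighted_cov(rng.normal(size=(250, 5)))
```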

4.
The covariance matrix of asset returns can change drastically and generate huge losses in portfolio value under extreme conditions such as market interventions and financial crises. Estimating the covariance matrix under a chaotic market is therefore a pressing task in risk management. Stress testing has become a standard procedure for many financial institutions to estimate the capital requirement for their portfolio holdings under various stress scenarios. A possible stress scenario is to adjust the covariance matrix to mimic the situation under an underlying stress event; it is reasonable that when some covariances are altered, other covariances should vary as well. Recently, Ng et al. proposed a unified approach to determine a proper correlation matrix which reflects subjective views of correlations. However, this approach requires matrix vectorization and hence is not computationally efficient for high-dimensional matrices. Moreover, it adjusts only correlations, yet it is well known that high correlations often go together with high standard deviations during a crisis period. To address these limitations, we propose a Bayesian approach to covariance matrix adjustment that incorporates subjective views of covariances. Our approach is computationally efficient and can be applied to high-dimensional matrices.
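The paper's Bayesian machinery does not fit in a short sketch, but the consistency problem it solves is easy to show: overwriting covariance entries with stress views can destroy positive semidefiniteness, so other entries must move too. The toy repair below clips negative eigenvalues; it is a crude stand-in, not the authors' method.

```python
import numpy as np

# Base covariance (vols 0.2, 0.3, 0.4; invented numbers).
Sigma = np.array([[0.040, 0.010, 0.008],
                  [0.010, 0.090, 0.012],
                  [0.008, 0.012, 0.160]])

stressed = Sigma.copy()
stressed[0, 1] = stressed[1, 0] = 0.054   # stress view: correlation 0.9
stressed[0, 2] = stressed[2, 0] = 0.072   # stress view: correlation 0.9

print(np.linalg.eigvalsh(stressed).min())  # negative: no longer a valid covariance

# Crude repair (not the paper's Bayesian update): clip negative eigenvalues.
vals, vecs = np.linalg.eigh(stressed)
repaired = vecs @ np.diag(np.clip(vals, 0, None)) @ vecs.T
```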

5.
In this article, we evaluate the performance of mutual funds in China between 2006 and 2014. We first estimate time-varying abnormal returns of each mutual fund using an active peer benchmark-augmented factor pricing model. An index of riskiness is then estimated and used to calculate the augmented performance measure (APM). By construction, the APM separates the fund's managerial premium from the systematic risk premium, so it improves on the economic performance measure. The APM also incorporates information beyond the first and second moments of the distribution of fund abnormal returns and is therefore more informative than the Sharpe ratio.

6.
A novel algorithm is developed for the problem of finding a low-rank correlation matrix nearest to a given correlation matrix. The algorithm is based on majorization and is therefore globally convergent. It is computationally efficient, straightforward to implement, and can handle arbitrary weights on the entries of the correlation matrix. A simulation study suggests that majorization compares favourably with competing approaches in terms of the quality of the solution within a fixed computational time. The problem of rank reduction of correlation matrices occurs when pricing a derivative that depends on a large number of assets whose prices are modelled as correlated log-normal processes; such applications mainly concern interest rates.
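A simple baseline for the same task, eigenvalue truncation followed by row rescaling of the loading matrix to preserve the unit diagonal; this is not the paper's majorization algorithm, only a quick point of comparison.

```python
import numpy as np

def lowrank_corr(C, k):
    """Rank-k correlation matrix near C via truncated eigendecomposition
    plus row rescaling (a simple baseline, not majorization)."""
    vals, vecs = np.linalg.eigh(C)
    idx = np.argsort(vals)[::-1][:k]
    B = vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))
    B /= np.linalg.norm(B, axis=1, keepdims=True)   # restores the unit diagonal
    return B @ B.T

C = np.array([[1.0, 0.8, 0.6, 0.5],
              [0.8, 1.0, 0.7, 0.6],
              [0.6, 0.7, 1.0, 0.8],
              [0.5, 0.6, 0.8, 1.0]])
print(np.round(lowrank_corr(C, 2), 3))
```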

7.
Klaus Derfuss, Abacus, 2015, 51(2): 238–278
Extant findings regarding how context variables relate to participative budgeting and the evaluative use of accounting performance measures (APM) are contradictory. Unlike previous reviews of such findings, this empirical article uses a meta‐analysis to examine the relations of context variables with participative budgeting or evaluative use of APM to determine (i) how the variables relate and (ii) which factors might cause between‐correlation variance, such as statistical artefacts or moderating influences of variable measures, sample selection, or industry differences. All meta‐analyses are based on rather small samples. Three groups of context variables emerge. First, some relate significantly and homogeneously to participative budgeting or evaluative use of APM; these direct relations should be considered explicitly in further studies. Second, for some variables, the relations are homogeneous but not significant, such that they are neither simple nor direct. Third, substantial variance exists in the correlations for some context variables; these relations are contingent on other influences. Industry differences and sample selection explain some inconsistencies in exploratory moderator analyses and should receive additional research attention.

8.
This paper examines two asymmetric stochastic volatility models used to describe the heavy tails and volatility dependencies found in most financial returns. The first is the autoregressive stochastic volatility model with Student's t-distribution (ARSV-t); the second is the multifactor stochastic volatility (MFSV) model. To estimate these models, the analysis employs the Monte Carlo likelihood (MCL) method proposed by Sandmann and Koopman [Sandmann, G., Koopman, S.J., 1998. Estimation of stochastic volatility models via Monte Carlo maximum likelihood. Journal of Econometrics 87, 271–301]. To guarantee the positive definiteness of the sampling distribution of the MCL, the nearest covariance matrix in the Frobenius norm is used. The empirical results, using returns on the S&P 500 Composite and Tokyo stock price indexes and the Japan–US exchange rate, indicate that the ARSV-t model provides a better fit than the MFSV model on the basis of the Akaike information criterion (AIC) and the Bayesian information criterion (BIC).
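For intuition, a minimal simulation of the first model's data-generating process: an AR(1) log-volatility driving Student-t returns. Parameter values are arbitrary, and the MCL estimator itself is not implemented.

```python
import numpy as np

rng = np.random.default_rng(42)
T, phi, sigma_eta, nu = 2000, 0.97, 0.15, 5.0

h = np.zeros(T)                    # log-volatility follows an AR(1)
for t in range(1, T):
    h[t] = phi * h[t - 1] + sigma_eta * rng.normal()

r = np.exp(h / 2) * rng.standard_t(nu, size=T)   # Student-t return shocks

# Heavy tails show up as excess kurtosis well above 3:
kurt = ((r - r.mean())**4).mean() / r.var()**2
print(round(kurt, 1))
```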

9.
We investigate whether primary market, original‐issue risk premiums on speculative‐grade debt are justified solely by expected defaults or whether these risk premiums also include other orthogonal risk components. Studies of secondary‐market holding period risk and return have hypothesized that risk premiums on speculative‐grade debt may be explained by bond‐ and equity‐related systematic risk and possibly other types of risk. Using an actuarial approach that considers contemporaneous correlation between default frequency and severity and first‐order serial correlation, we cannot reject the hypothesis that the entire original‐issue risk premium can be explained by expected default losses. This suggests that speculative‐grade bond primary markets efficiently price default risk and that other types of risk are priced as coincident as opposed to orthogonal risks.
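The actuarial decomposition behind the test can be stated in one line: with correlated default frequency f and severity s, the expected loss rate is E[fs] = E[f]E[s] + Cov(f, s). A toy calculation with invented scenario numbers shows why ignoring the correlation understates expected default losses.

```python
import numpy as np

# Invented annual scenarios: default frequency and loss severity rise together.
freq     = np.array([0.02, 0.03, 0.05, 0.08])   # default rates
severity = np.array([0.40, 0.45, 0.55, 0.65])   # loss given default

naive    = freq.mean() * severity.mean()        # ignores the correlation
expected = (freq * severity).mean()             # = naive + Cov(freq, severity)

print(f"naive {naive:.4f}  vs  with correlation {expected:.4f}")
```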

10.
The simplest way to describe the dependence for a set of financial assets is their correlation matrix. This correlation matrix can be improper when it is specified element-wise. We describe a new method for obtaining a positive definite correlation matrix starting from an improper one. The expert's opinion of, and trust in, each pairwise correlation are described by a beta distribution. By combining these individual distributions, a joint distribution over the space of positive definite correlation matrices is obtained using Cholesky factorization, and its mode constitutes the new proper correlation matrix. The optimization is complemented by a visual representation of the entries that were most affected by the legalization procedure. We also sketch a Bayesian approach to the same problem.
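The Cholesky step is what guarantees validity: any lower-triangular matrix with unit-norm rows maps to a matrix with unit diagonal and non-negative eigenvalues, so the mode search can move freely in that space. A sketch of the parameterisation only (the beta-distribution objective is omitted):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4

L = np.tril(rng.normal(size=(n, n)))            # free lower-triangular parameters
L /= np.linalg.norm(L, axis=1, keepdims=True)   # unit-norm rows

C = L @ L.T   # unit diagonal and positive semidefinite by construction
print(np.round(np.diag(C), 6), np.linalg.eigvalsh(C).min() >= -1e-12)
```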

11.
李晓林, 曾毳毳, 《金融论坛》2007, 12(11): 19–24
A comparison of the basic characteristics of insurance risk and credit risk shows that the methods and ideas of actuarial models can be used to describe the credit risk of commercial banks, and that an applicable quantitative credit-risk management model can be derived by combining qualitative and quantitative analysis. After the data have been organized and validated, a "normal-loss" model can be built by borrowing the survival-model ideas of life insurance actuarial science, which then explains the loan loss rate; borrowing the reserve-estimation methods of non-life insurance, the chain-ladder method can be used to estimate the bad-debt provisions that a commercial bank should set aside for its short-term loan business. On this basis, the "normal-loss" model and the chain-ladder method can be combined to estimate the bad-debt provisions for long-term loans.
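For readers unfamiliar with the non-life technique mentioned above, here is a minimal chain-ladder sketch on an invented cumulative loss triangle (the "normal-loss" model itself is not reproduced):

```python
import numpy as np

# Invented cumulative loan-loss triangle: rows = origin years,
# columns = development years; NaN = not yet observed.
tri = np.array([[100.0, 150.0, 175.0, 180.0],
                [110.0, 168.0, 196.0, np.nan],
                [120.0, 186.0, np.nan, np.nan],
                [130.0, np.nan, np.nan, np.nan]])

n = tri.shape[1]
full = tri.copy()
for j in range(n - 1):
    seen = ~np.isnan(tri[:, j + 1])
    f = tri[seen, j + 1].sum() / tri[seen, j].sum()   # development factor
    todo = np.isnan(full[:, j + 1])
    full[todo, j + 1] = full[todo, j] * f             # project forward

latest = np.array([row[~np.isnan(row)][-1] for row in tri])
reserve = full[:, -1] - latest                        # bad-debt provision
print(np.round(reserve, 1))
```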

13.
Frank Hartmann, Abacus, 2005, 41(3): 241–264
This article examines how task uncertainty, environmental uncertainty and tolerance for ambiguity (TFA) affect managerial opinions about the appropriateness of accounting performance measures (APM). Based on the accounting and psychology literature, this study argues that task uncertainty and environmental uncertainty differ in their direct effects on the appropriateness of APM, and furthermore that the relationship between uncertainty and the appropriateness of APM is moderated by managers' TFA. Hypotheses are developed and tested with data from a survey of 250 managers in eleven organizations, using partial least squares (PLS). Overall, the results show that the two types of uncertainty have opposite effects on managers' opinions about the appropriateness of APM, and that these effects are moderated by TFA, confirming expectations. No direct effect of TFA on the appropriateness of APM was found. These findings help explain inconsistencies in the extant behavioural management accounting literature on the appropriateness of APM under uncertainty.

14.
This article evaluates the relative significance of research published in 16 risk, insurance, and actuarial journals by examining the frequency of citations in these journals and in 16 of the leading finance journals during the years 1996 through 2000. First, the article reports the frequency with which each sample risk, insurance, and actuarial journal cites itself and the other sample journals, to communicate the degree to which each journal's published research has influenced the other sample journals. The article then divides the 16 journals into two groups, (1) risk and insurance journals and (2) actuarial journals, and ranks them within each group by total number of citations, including and excluding self-citations, as well as by influence on a per-article-published basis. Finally, this study reports the most frequently cited articles from the sample risk, insurance, and actuarial journals.

15.
We introduce a risk-reduction-based procedure to identify a subset of funds whose resulting opportunity set is at least as good as that of the original menu when short-sale constraints are imposed. Relying on Wald tests for mean–variance spanning, we show that the better results for the subset can be explained by a higher concentration of covariance entries between its assets, ultimately leading to smaller Frobenius norms of the associated matrices. With data on US defined-contribution plans, whose participants have limited financial literacy, tend to be overwhelmed, and prefer to choose among fewer options, we obtain a 75% average reduction in the size of the menu.
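The authors' spanning tests are not reproduced here, but the Frobenius-norm comparison they report is easy to illustrate on invented data; the brute-force subset search below is our illustration, not their selection procedure.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
R = rng.normal(0.005, 0.04, size=(120, 8))   # invented monthly returns, 8 funds
S = np.cov(R, rowvar=False)

# Among all 4-fund sub-menus, find the one whose covariance block
# has the smallest Frobenius norm.
best = min(combinations(range(8), 4),
           key=lambda idx: np.linalg.norm(S[np.ix_(idx, idx)], "fro"))
print("4-fund menu with smallest covariance Frobenius norm:", best)
```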

16.
Multi-life models are useful in actuarial science for studying life contingencies. Contingent probabilities are well understood by most actuaries and are discussed extensively in the existing actuarial literature. However, the mean of a lifetime in a multi-life model involving the order of deaths is often rather challenging to interpret for actuaries who do not know measure-theoretic probability. Standard textbooks on actuarial science or statistics do not elaborate on the correct interpretation of contingent means, leaving actuaries at risk of making a blunder. This paper presents the correct interpretation both heuristically and rigorously in non-measure-theoretic language, so that actuaries will be aware of some common misconceptions and avoid pitfalls in their work. The primary audience is practicing actuaries, actuarial students, and actuarial educators, so several actuarial applications are given; we hope that applied statisticians will also find this paper useful.
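The pitfall is easy to demonstrate by simulation. For two independent exponential lifetimes (a hypothetical example with invented parameters), the quantity E[Tx * 1{Tx < Ty}] that enters contingent actuarial values differs sharply from the conditional mean E[Tx | Tx < Ty]:

```python
import numpy as np

rng = np.random.default_rng(0)
mu_x, mu_y, N = 0.04, 0.05, 1_000_000

tx = rng.exponential(1 / mu_x, N)
ty = rng.exponential(1 / mu_y, N)
first = tx < ty                       # (x) dies before (y)

p = first.mean()                      # contingent probability ~ mu_x/(mu_x+mu_y)
truncated = (tx * first).mean()       # E[Tx * 1{Tx < Ty}], the actuarial quantity
conditional = tx[first].mean()        # E[Tx | Tx < Ty] = 1/(mu_x+mu_y) ~ 11.1 years

print(round(p, 3), round(truncated, 2), round(conditional, 2))
```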

17.
The bibliographies of 17 risk journals were evaluated to determine the relative influence of these journals on risk, insurance, and actuarial research published during the years 2001 through 2005. Tables are provided that show the frequency with which each of these journals cites itself and the other sample journals. The journals are ranked, within two groups (a risk and insurance group and an actuarial group), based on their total influence (total citations including and excluding self-citations) and their per-article influence (per-article citations including and excluding self-citations). Finally, the most frequently cited articles from each risk journal are reported.

18.
We propose a dynamic factor state-space model for high-dimensional covariance matrices of asset returns. It makes use of observed risk factors and assumes that the latent integrated joint covariance matrix of the assets and the factors is observed through their realized covariance matrix with a Wishart measurement density. For the latent integrated covariance matrix of the assets we impose a strict factor structure, allowing for dynamic variation in the covariance matrices of the factors and the residual components as well as in the factor loadings. This factor structure translates into a factorization of the Wishart measurement density, which facilitates statistical inference based on simple Bayesian MCMC procedures and makes the approach scalable with respect to the number of assets. An empirical application to realized covariance matrices for 60 NYSE-traded stocks, using the Fama–French factors and sector-specific factors represented by exchange-traded funds (ETFs), shows that the model performs very well both in and out of sample.
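At any fixed time point the strict factor structure is simply Sigma = B Sigma_f B' + D with diagonal D; the static sketch below shows that construction (the paper's dynamic evolution of B, Sigma_f and D is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)
n_assets, k = 60, 4

B = rng.normal(0.0, 0.3, size=(n_assets, k))     # factor loadings
Sigma_f = np.diag([0.04, 0.02, 0.02, 0.01])      # factor covariance (diagonal toy)
D = np.diag(rng.uniform(0.01, 0.05, n_assets))   # idiosyncratic variances

Sigma = B @ Sigma_f @ B.T + D    # strict factor covariance: rank-k plus diagonal
print(Sigma.shape, np.linalg.eigvalsh(Sigma).min() > 0)
```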

19.
We analyze covariance matrix estimation from the perspective of market risk management, where the goal is to obtain accurate estimates of portfolio risk across essentially all portfolios—even those with small standard deviations. We propose a simple but effective visualisation tool to assess bias across a wide range of portfolios. We employ a portfolio perspective to determine covariance matrix loss functions particularly suitable for market risk management. Proper regularisation of the covariance matrix estimate significantly improves performance. These methods are applied to credit default swaps, for which covariance matrices are used to set portfolio margin requirements for central clearing. Among the methods we test, the graphical lasso estimator performs particularly well. The graphical lasso and a hierarchical clustering estimator also yield economically meaningful representations of market structure through a graphical model and a hierarchy, respectively.
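scikit-learn ships a graphical lasso estimator, so the regularisation step can be sketched in a few lines; the returns below are simulated stand-ins, and the margin-setting application is not included.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(2)
X = rng.multivariate_normal(np.zeros(6), np.eye(6), size=500)  # stand-in returns

model = GraphicalLassoCV().fit(X)   # cross-validated sparsity penalty
Sigma_hat = model.covariance_       # regularised covariance estimate
Theta_hat = model.precision_        # sparse precision matrix -> graphical model
```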

20.
Journal of Banking & Finance, 2004, 28(11): 2603–2639
Credit migration matrices are cardinal inputs to many risk management applications; their accurate estimation is therefore critical. We explore two approaches: cohort and two variants of duration – one imposing, the other relaxing time homogeneity – and the resulting differences, both statistically through matrix norms and economically using a credit portfolio model. We propose a new metric for comparing these matrices based on singular values and apply it to credit rating histories of S&P rated US firms from 1981–2002. We show that the migration matrices have been increasing in “size” since the mid-1990s, with 2002 being the “largest” in the sense of being the most dynamic. We develop a testing procedure using bootstrap techniques to assess statistically the differences between migration matrices as represented by our metric. We demonstrate that it can matter substantially which estimation method is chosen: economic credit risk capital differences implied by different estimation techniques can be as large as differences between economic regimes, recession vs. expansion. Ignoring the efficiency gain inherent in the duration methods by using the cohort method instead is more damaging than imposing a (possibly false) assumption of time homogeneity.
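A sketch of the cohort estimator together with a singular-value mobility metric in the spirit of the paper's, computed from the mobility part P - I on invented transition counts:

```python
import numpy as np

# Invented one-year transition counts: rows = initial rating {A, B, C}.
counts = np.array([[90.0, 8.0, 2.0],
                   [5.0, 85.0, 10.0],
                   [1.0, 6.0, 93.0]])

P = counts / counts.sum(axis=1, keepdims=True)   # cohort estimator

# "Size" metric from the singular values of the mobility matrix P - I:
# larger values indicate a more dynamic migration matrix.
metric = np.linalg.svd(P - np.eye(3), compute_uv=False).mean()
print(round(metric, 4))
```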
