Similar Articles
 20 similar articles found (search time: 140 ms)
1.
This paper considers the problem of risk sharing, where a coalition of homogeneous agents, each bearing a random cost, aggregates their costs and shares the value‐at‐risk of the resulting risky position. Because distributional information is limited in practice, the joint distribution of the agents' random costs is difficult to acquire. The coalition, aware of this distributional ambiguity, therefore evaluates the worst‐case value‐at‐risk within a commonly agreed ambiguity set of possible joint distributions. Through the lens of cooperative game theory, we show that this coalitional worst‐case value‐at‐risk is subadditive for the popular ambiguity sets in the distributionally robust optimization literature that are based on (i) convex moments or (ii) Wasserstein distance to a reference distribution. In addition, we propose easy‐to‐compute core allocation schemes to share the worst‐case value‐at‐risk. Our results extend readily to sharing the worst‐case conditional value‐at‐risk under distributional ambiguity.
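A minimal numerical sketch of the worst-case value-at-risk under a moment-based ambiguity set. The closed form below is the familiar Cantelli-type bound from the distributionally robust optimization literature (worst-case VaR over all distributions with a given mean and variance); the agents' means, standard deviations, and correlation are made-up values, and the paper's actual core allocation schemes are not reproduced here.

```python
import math

def worst_case_var(mu, sigma, eps):
    # Closed-form worst-case VaR over all distributions with given
    # mean and variance (a Cantelli-type bound from the DRO literature):
    #   sup_F VaR_eps(L) = mu + sigma * sqrt((1 - eps) / eps)
    return mu + sigma * math.sqrt((1 - eps) / eps)

# two agents' losses: hypothetical means, stds, and a correlation
mu1, s1 = 10.0, 4.0
mu2, s2 = 8.0, 3.0
rho = 0.5
s_sum = math.sqrt(s1**2 + s2**2 + 2 * rho * s1 * s2)

eps = 0.05
v1 = worst_case_var(mu1, s1, eps)
v2 = worst_case_var(mu2, s2, eps)
v_sum = worst_case_var(mu1 + mu2, s_sum, eps)

# subadditivity: pooling never increases the worst-case capital
print(v_sum <= v1 + v2)  # True
```

In this moment-based setting, subadditivity comes from the triangle inequality for standard deviations, which is a simple analogue of the coalition result stated in the abstract.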

2.
It is well known that purely structural models of default cannot explain short‐term credit spreads, while purely intensity‐based models lead to completely unpredictable default events. Here we introduce a hybrid model of default, in which a firm enters a “distressed” state once its nontradable creditworthiness index hits a critical level. The distressed firm then defaults upon the next arrival of a Poisson process. To value defaultable bonds and credit default swaps (CDSs), we introduce the concept of robust indifference pricing. This paradigm incorporates both risk aversion and model uncertainty. In robust indifference pricing, the optimization problem is modified to include optimizing over a set of candidate measures, in addition to optimizing over trading strategies, subject to a measure-dependent penalty. Using our model and valuation framework, we derive analytical solutions for bond yields and CDS spreads, and find that while ambiguity aversion plays a similar role to risk aversion, it also has distinct effects. In particular, ambiguity aversion allows for significant short‐term spreads.

3.
Expected utility models in portfolio optimization are based on the assumption of complete knowledge of the distribution of random returns. In this paper, we relax this assumption to knowledge of only the mean, covariance, and support information. No additional restrictions, such as normality, are placed on the type of distribution. The investor's utility is modeled as a piecewise‐linear concave function. We derive exact and approximate optimal trading strategies for a robust (maximin) expected utility model, where the investor maximizes his worst‐case expected utility over a set of ambiguous distributions. The optimal portfolios are identified using a tractable conic programming approach. Extensions of the model that capture asymmetry using partitioned statistics information and box‐type uncertainty in the mean and covariance matrix are provided. Using the optimized certainty equivalent framework, we connect our results with robust or ambiguous convex risk measures, in which the investor minimizes his worst‐case risk under distributional ambiguity. New closed‐form results for the worst‐case optimized certainty equivalent risk measures and optimal portfolios are provided for two‐ and three‐piece utility functions. For more complicated utility functions, computational experiments indicate that such robust approaches can provide good trading strategies in financial markets.
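The paper identifies optimal portfolios via tractable conic programming; the sketch below replaces that machinery with a brute-force grid search, purely to illustrate the maximin criterion. The two-piece concave utility and the three candidate scenario sets are invented for the example, not taken from the paper.

```python
import numpy as np

# two-piece concave utility: slope 1 below 0, slope 0.5 above
def u(x):
    return np.where(x < 0, x, 0.5 * x)

# hypothetical scenario returns for 2 assets under 3 candidate distributions
scenarios = [
    np.array([[0.02, 0.05], [-0.01, -0.08], [0.01, 0.12]]),
    np.array([[0.01, -0.02], [0.00, 0.06], [-0.02, 0.03]]),
    np.array([[0.03, 0.10], [-0.03, -0.15], [0.02, 0.04]]),
]

best_w, best_val = None, -np.inf
for w1 in np.linspace(0, 1, 101):        # grid over portfolio weights
    w = np.array([w1, 1 - w1])
    # worst-case expected utility over the candidate distributions
    worst = min(u(S @ w).mean() for S in scenarios)
    if worst > best_val:
        best_w, best_val = w, worst

print(best_w, best_val)
```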

4.
MODEL UNCERTAINTY AND ITS IMPACT ON THE PRICING OF DERIVATIVE INSTRUMENTS   (cited by 4: 0 self-citations, 4 by others)
Rama Cont, Mathematical Finance, 2006, 16(3): 519–547
Uncertainty on the choice of an option pricing model can lead to "model risk" in the valuation of portfolios of options. After discussing some properties which a quantitative measure of model uncertainty should verify in order to be useful and relevant in the context of risk management of derivative instruments, we introduce a quantitative framework for measuring model uncertainty in the context of derivative pricing. Two methods are proposed: the first method is based on a coherent risk measure compatible with market prices of derivatives, while the second method is based on a convex risk measure. Our measures of model risk lead to a premium for model uncertainty which is comparable to other risk measures and compatible with observations of market prices of a set of benchmark derivatives. Finally, we discuss some implications for the management of "model risk."
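Cont's framework requires the candidate models to be calibrated to the market prices of benchmark derivatives; the sketch below skips calibration entirely and simply measures the spread of prices over a hypothetical set of Black-Scholes volatilities, which conveys the flavor of a model-uncertainty premium without reproducing the paper's coherent or convex measures.

```python
import math

def bs_call(S, K, r, sigma, T):
    # standard Black-Scholes European call price
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    return S * Phi(d1) - K * math.exp(-r * T) * Phi(d2)

# candidate pricing models: Black-Scholes at several plausible volatilities
vols = [0.15, 0.20, 0.25, 0.30]
prices = [bs_call(100, 105, 0.01, s, 0.5) for s in vols]

# a crude model-uncertainty gauge in the spirit of the paper:
# the spread between the highest and lowest candidate-model price
model_risk = max(prices) - min(prices)
print(round(model_risk, 4))
```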

5.
This paper provides a coherent method for scenario aggregation addressing model uncertainty. It is based on divergence minimization from a reference probability measure subject to scenario constraints. An example from regulatory practice motivates the definition of five fundamental criteria that serve as a basis for our method. Standard risk measures, such as value‐at‐risk and expected shortfall, are shown to be robust with respect to minimum divergence scenario aggregation. Various examples illustrate the tractability of our method.
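A sketch of divergence-minimizing scenario aggregation under a single stress constraint, assuming relative entropy as the divergence. Minimizing KL divergence from the reference measure subject to a mean constraint yields exponentially tilted weights, with the tilt parameter found by bisection; the scenario grid and stress target are made up for the example.

```python
import math

# reference scenarios (equally weighted) for a risk factor
x = [-3.0, -1.0, 0.0, 1.0, 2.0]
p = [0.2] * 5

target = -0.5  # stress constraint: shift the scenario mean down to -0.5

def tilted(lmbda):
    # KL-minimal reweighting subject to a mean constraint is an
    # exponential tilt: q_i proportional to p_i * exp(lambda * x_i)
    w = [pi * math.exp(lmbda * xi) for pi, xi in zip(p, x)]
    z = sum(w)
    return [wi / z for wi in w]

def mean_under(lmbda):
    q = tilted(lmbda)
    return sum(qi * xi for qi, xi in zip(q, x))

# the tilted mean is increasing in lambda, so bisect for the target
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mean_under(mid) < target:
        lo = mid
    else:
        hi = mid

q = tilted(0.5 * (lo + hi))
print([round(qi, 4) for qi in q])
```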

6.
We propose to interpret distribution model risk as sensitivity of expected loss to changes in the risk factor distribution, and to measure the distribution model risk of a portfolio by the maximum expected loss over a set of plausible distributions defined in terms of some divergence from an estimated distribution. The divergence may be relative entropy or another f‐divergence or Bregman distance. We use the theory of minimizing convex integral functionals under moment constraints to give formulae for the calculation of distribution model risk and to explicitly determine the worst case distribution from the set of plausible distributions. We also evaluate related risk measures describing divergence preferences.
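For relative entropy, the worst-case expected loss over a divergence ball admits a well-known one-dimensional convex dual, min over theta > 0 of theta * log E[exp(L/theta)] + theta * eta. The sketch below evaluates that dual on a crude grid with invented scenario losses; it illustrates the flavor of the paper's formulae, not their general f-divergence form.

```python
import math

# reference scenarios for a loss, equally weighted
losses = [0.0, 1.0, 2.0, 5.0, 10.0]
p = [0.2] * 5
eta = 0.1  # radius of the KL divergence ball

def dual_objective(theta):
    # theta * log E_P[exp(L / theta)] + theta * eta
    m = sum(pi * math.exp(li / theta) for pi, li in zip(p, losses))
    return theta * math.log(m) + theta * eta

# crude one-dimensional minimization over theta > 0
thetas = [0.1 * k for k in range(1, 1001)]
wc_loss = min(dual_objective(t) for t in thetas)

nominal = sum(pi * li for pi, li in zip(p, losses))
print(nominal, round(wc_loss, 4))  # worst case exceeds the nominal mean
```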

7.
The problem of robust utility maximization in an incomplete market with volatility uncertainty is considered, in the sense that the volatility of the market is only assumed to lie between two given bounds. The set of all possible models (probability measures) considered here is nondominated. We propose studying this problem in the framework of second‐order backward stochastic differential equations (2BSDEs for short) with quadratic growth generators. We show for exponential, power, and logarithmic utilities that the value function of the problem can be written as the initial value of a particular 2BSDE, and prove existence of an optimal strategy. Finally, several examples are provided that shed more light on the problem and its links with the classical utility maximization problem. In particular, we show that in some cases the upper bound of the volatility interval plays a central role, exactly as in the option pricing problem with uncertain volatility models.

8.
The optimized certainty equivalent (OCE) is a decision-theoretic criterion based on a utility function, first introduced by the authors in 1986. This paper re-examines this fundamental concept, studies and extends its main properties, and relates it to recent concepts of risk measures. We show that the negative of the OCE naturally provides a wide family of risk measures that fits the axiomatic formalism of convex risk measures. Duality theory is used to reveal the link between the OCE and the φ-divergence functional (a generalization of relative entropy), and allows for deriving various variational formulas for risk measures. Within this interpretation of the OCE, we prove that several risk measures recently analyzed and proposed in the literature (e.g., conditional value at risk, bounded shortfall risk) can be derived as special cases of the OCE by using particular utility functions. We further study the relations between the OCE and other certainty equivalents, providing general conditions under which these can be viewed as coherent/convex risk measures. Throughout the paper, several examples illustrate the flexibility and adequacy of the OCE for building risk measures.
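One concrete instance mentioned in the abstract: with a suitable piecewise-linear utility, the OCE recovers the conditional value at risk. In loss form this is the Rockafellar-Uryasev minimization; the sample losses below are made up, and the example checks the formula against the direct tail average.

```python
import numpy as np

def cvar_via_oce(losses, alpha):
    # CVaR as an optimized certainty equivalent with a piecewise-linear
    # utility (the Rockafellar-Uryasev representation, in loss form):
    #   CVaR_a(L) = min_eta { eta + E[(L - eta)^+] / (1 - a) }
    etas = np.sort(losses)  # the objective is convex piecewise linear in
                            # eta, so a minimizer sits at a sample point
    vals = [eta + np.maximum(losses - eta, 0).mean() / (1 - alpha)
            for eta in etas]
    return min(vals)

losses = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0])
alpha = 0.8
# direct tail average: mean of the worst 20% (top 2 of 10 losses)
direct = np.sort(losses)[-2:].mean()
print(cvar_via_oce(losses, alpha), direct)  # both 9.5
```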

9.
We consider settings in which the distribution of a multivariate random variable is partly ambiguous. We assume the ambiguity lies on the level of the dependence structure, and that the marginal distributions are known. Furthermore, a current best guess for the distribution, called reference measure, is available. We work with the set of distributions that are both close to the given reference measure in a transportation distance (e.g., the Wasserstein distance), and additionally have the correct marginal structure. The goal is to find upper and lower bounds for integrals of interest with respect to distributions in this set. The described problem appears naturally in the context of risk aggregation. When aggregating different risks, the marginal distributions of these risks are known and the task is to quantify their joint effect on a given system. This is typically done by applying a meaningful risk measure to the sum of the individual risks. For this purpose, the stochastic interdependencies between the risks need to be specified. In practice, the models of this dependence structure are however subject to relatively high model ambiguity. The contribution of this paper is twofold: First, we derive a dual representation of the considered problem and prove that strong duality holds. Second, we propose a generally applicable and computationally feasible method, which relies on neural networks, in order to numerically solve the derived dual problem. The latter method is tested on a number of toy examples, before it is finally applied to perform robust risk aggregation in a real‐world instance.

10.
In this paper, we study the aggregate risk of inhomogeneous risks with dependence uncertainty, evaluated by a generic risk measure. We say that a pair of risk measures is asymptotically equivalent if the ratio of the worst‐case values of the two risk measures is almost one for the sum of a large number of risks with unknown dependence structure. The study of asymptotic equivalence is particularly important for a pair of a noncoherent risk measure and a coherent risk measure, as the worst‐case value of a noncoherent risk measure under dependence uncertainty is typically difficult to obtain. The main contribution of this paper is to establish general asymptotic equivalence results for the classes of distortion risk measures and convex risk measures under different mild conditions. The results implicitly suggest that it is only reasonable to implement a coherent risk measure for the aggregation of a large number of risks with uncertainty in the dependence structure, a relevant situation for risk management practice.

11.
In this study, we investigate whether the five‐factor model of Fama and French (2015) explains the pricing structure of stocks well using long‐run data for Japan. We conduct standard cross‐sectional asset pricing tests and examine the additional explanatory power of the new Fama and French factors: the robust‐minus‐weak profitability factor and the conservative‐minus‐aggressive investment factor. We find that the robust‐minus‐weak and conservative‐minus‐aggressive factors are not statistically significant when we conduct generalized method of moments (GMM) tests with the Hansen–Jagannathan distance measure. Thus, we conclude that the original version of the Fama and French five‐factor model is not the best benchmark pricing model for Japanese data during our sample period from 1978 to 2014.
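The paper's tests use GMM with the Hansen-Jagannathan distance; as a much simpler stand-in, the sketch below runs a two-pass (Fama-MacBeth style) cross-sectional regression on noiseless synthetic data, just to illustrate what it means for factor premia to be priced. All numbers are invented, and the idiosyncratic noise is set to zero so the premia are recovered exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 600, 25                            # months, test portfolios

# synthetic factors (think: market, profitability, investment)
true_premia = np.array([0.5, 0.2, 0.1])   # percent per month
f = rng.normal(true_premia, 2.0, size=(T, 3))
betas = rng.uniform(0.5, 1.5, size=(N, 3))
returns = f @ betas.T                     # zero idiosyncratic noise

# pass 1: time-series regression of each portfolio on the factors
X = np.column_stack([np.ones(T), f])
b = np.linalg.lstsq(X, returns, rcond=None)[0][1:].T   # estimated betas

# pass 2: cross-sectional regression of mean returns on estimated betas
Xc = np.column_stack([np.ones(N), b])
premia = np.linalg.lstsq(Xc, returns.mean(axis=0), rcond=None)[0][1:]

print(np.round(premia, 3))   # should recover the sample factor means
```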

12.
One of the well‐known approaches to the problem of option pricing is a minimization of the global risk, considered as the expected quadratic net loss. In the paper, a multidimensional variant of the problem is studied. To obtain the existence of the variance‐optimal hedging strategy in a model without transaction costs, we can apply the result of Monat and Stricker. Another possibility is a generalization of the nondegeneracy condition that appeared in a paper of Schweizer, in which a one‐dimensional problem is solved. The relationship between the two approaches is shown. A more difficult problem is the existence of an optimal solution in the model with transaction costs. A sufficient condition in a multidimensional case is formulated.
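In a single period, minimizing the expected quadratic net loss reduces to a linear regression of the claim on the price increments (the normal equations of variance-optimal hedging). The two-asset example below, with made-up dynamics and an unhedgeable noise term, illustrates the multidimensional case without transaction costs.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000

# one-period price increments of two hedging assets (made-up dynamics)
z = rng.normal(size=(n, 2))
dS = np.column_stack([z[:, 0], 0.3 * z[:, 0] + 0.9 * z[:, 1]])

# contingent claim: exposure to both assets plus unhedgeable noise
H = 1.0 + 0.8 * dS[:, 0] - 0.5 * dS[:, 1] + 0.2 * rng.normal(size=n)

# variance-optimal strategy = regression of H on the increments,
# i.e., theta solves the normal equations Cov(dS) theta = Cov(dS, H)
X = np.column_stack([np.ones(n), dS])
sol = np.linalg.lstsq(X, H, rcond=None)[0]
c, theta = sol[0], sol[1:]
residual_var = np.var(H - X @ sol)   # risk left after optimal hedging

print(np.round(theta, 2), round(float(residual_var), 3))
```

The residual variance (about 0.04 here) is exactly the part of the claim that no self-financing strategy in these two assets can remove.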

13.
The alpha‐maxmin model is a prominent example of preferences under Knightian uncertainty, as it allows one to distinguish between ambiguity and ambiguity attitude. These preferences are dynamically inconsistent for nontrivial values of alpha. In this paper, we derive a recursive, dynamically consistent version of the alpha‐maxmin model. In the continuous‐time limit, the resulting dynamic utility function can be represented as a convex mixture between the worst and best case, but now at the local, infinitesimal level. We study the properties of the utility function and provide an Arrow–Pratt approximation of the static and dynamic certainty equivalents. We then derive a consumption‐based capital asset pricing formula and study the implications for derivative valuation under indifference pricing.
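A static sketch of the alpha-maxmin criterion itself (not the recursive, dynamically consistent version the paper derives): alpha weights the worst-case expectation and 1 - alpha the best case over a set of candidate distributions. The payoff and candidate measures below are invented.

```python
import numpy as np

def alpha_maxmin(payoff, candidate_probs, alpha):
    # alpha-maxmin evaluation over a set of candidate distributions:
    #   alpha * worst-case expectation + (1 - alpha) * best-case expectation
    expectations = [float(np.dot(p, payoff)) for p in candidate_probs]
    return alpha * min(expectations) + (1 - alpha) * max(expectations)

payoff = np.array([100.0, 50.0, -20.0])
# ambiguity: several candidate probability vectors over three states
candidates = [np.array([0.5, 0.3, 0.2]),
              np.array([0.3, 0.4, 0.3]),
              np.array([0.2, 0.3, 0.5])]

print(alpha_maxmin(payoff, candidates, 1.0))   # pure pessimism (maxmin)
print(alpha_maxmin(payoff, candidates, 0.0))   # pure optimism (maxmax)
print(alpha_maxmin(payoff, candidates, 0.7))   # a mixture of the two
```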

14.
This paper discusses the problem of hedging not perfectly replicable contingent claims using the numéraire portfolio. The proposed concept of benchmarked risk minimization leads beyond the classical no‐arbitrage paradigm. It provides in incomplete markets a generalization of the pricing under classical risk minimization, pioneered by Föllmer, Sondermann, and Schweizer. The latter relies on a quadratic criterion, requests square integrability of claims and gains processes, and relies on the existence of an equivalent risk‐neutral probability measure. Benchmarked risk minimization avoids these restrictive assumptions and provides symmetry with respect to all primary securities. It employs the real‐world probability measure and the numéraire portfolio to identify the minimal possible price for a contingent claim. Furthermore, the resulting benchmarked (i.e., numéraire portfolio denominated) profit and loss is only driven by uncertainty that is orthogonal to benchmarked‐traded uncertainty, and forms a local martingale that starts at zero. Consequently, sufficiently different benchmarked profits and losses, when pooled, become asymptotically negligible through diversification. This property makes benchmarked risk minimization the least expensive method for pricing and hedging diversified pools of not fully replicable benchmarked contingent claims. In addition, when hedging it incorporates evolving information about nonhedgeable uncertainty, which is ignored under classical risk minimization.

15.
We develop a general framework for statically hedging and pricing European‐style options with nonstandard terminal payoffs, which can be applied to mixed static–dynamic and semistatic hedges for many path‐dependent exotic options including variance swaps and barrier options. The goal is achieved by separating the hedging and pricing problems to obtain replicating strategies. Once prices have been obtained for a set of basis payoffs, the pricing and hedging of financial securities with arbitrary payoff functions is accomplished by computing a set of “hedge coefficients” for that security. This method is particularly well suited for pricing baskets of options simultaneously, and is robust to discontinuities of payoffs. In addition, the method enables a systematic comparison of the value of a payoff (or portfolio) across a set of competing model specifications with implications for security design.
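A sketch of the "hedge coefficients" idea: project a target payoff onto a set of basis payoffs by least squares; once the basis instruments' prices are known, the target's price is the same linear combination of those prices. The basis (bond, stock, three calls) and the capped-spread target below are made up, and the target happens to be exactly replicable here, so the fit is exact.

```python
import numpy as np

S = np.linspace(50.0, 150.0, 201)      # terminal-price grid

# basis payoffs: bond, stock, and calls at a few strikes
strikes = [80.0, 100.0, 120.0]
basis = np.column_stack(
    [np.ones_like(S), S] + [np.maximum(S - K, 0.0) for K in strikes])

# target: a capped call spread (kinks at 80 and 120)
target = np.clip(S - 80.0, 0.0, 40.0)

# hedge coefficients: least-squares projection on the basis payoffs
coef, *_ = np.linalg.lstsq(basis, target, rcond=None)
max_err = np.max(np.abs(basis @ coef - target))
print(np.round(coef, 3), float(max_err))
```

Here the recovered coefficients are long one 80-strike call and short one 120-strike call, the textbook static replication of a capped spread.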

16.
We study convex risk measures describing the upper and lower bounds of a good deal bound, which is a subinterval of a no‐arbitrage pricing bound. We call such a convex risk measure a good deal valuation and give a set of equivalent conditions for its existence in terms of the market. A good deal valuation is characterized by several equivalent properties; in particular, we see that a convex risk measure is a good deal valuation only if it is given as a risk indifference price. An application to the shortfall risk measure is given. In addition, we show that the no‐free‐lunch (NFL) condition is equivalent to the existence of a relevant convex risk measure that is a good deal valuation. Relevance turns out to be a condition for a good deal valuation to be reasonable. Further, we investigate conditions under which every good deal valuation is relevant.

17.
We investigate the pricing–hedging duality for American options in discrete-time financial models where some assets are traded dynamically and others, for example, a family of European options, only statically. In the first part of the paper, we consider an abstract setting, which includes the classical case with a fixed reference probability measure as well as the robust framework with a nondominated family of probability measures. Our first insight is that, by considering an enlargement of the space, we can see American options as European options and recover the pricing–hedging duality, which may fail in the original formulation. This can be seen as a weak formulation of the original problem. Our second insight is that a duality gap arises from the lack of dynamic consistency, and hence that a different enlargement, which reintroduces dynamic consistency, is sufficient to recover the pricing–hedging duality: it is enough to consider fictitious extensions of the market in which all the assets are traded dynamically. In the second part of the paper, we study two important examples of the robust framework: the setup of Bouchard and Nutz and the martingale optimal transport setup of Beiglböck, Henry‐Labordère, and Penkner, and show that our general results apply in both cases and enable us to obtain the pricing–hedging duality for American options.

18.
Since risky positions in multivariate portfolios can be offset by various choices of capital requirements that depend on the exchange rules and related transaction costs, it is natural to assume that the risk measures of random vectors are set‐valued. Furthermore, it is reasonable to include the exchange rules in the argument of the risk measure and so consider risk measures of set‐valued portfolios. This situation includes the classical Kabanov's transaction costs model, where the set‐valued portfolio is given by the sum of a random vector and an exchange cone, but also a number of further cases of additional liquidity constraints. We suggest a definition of the risk measure based on calling a set‐valued portfolio acceptable if it possesses a selection with all individually acceptable marginals. The obtained selection risk measure is coherent (or convex), law invariant, and has values being upper convex closed sets. We describe the dual representation of the selection risk measure and suggest efficient ways of approximating it from below and from above. In the case of Kabanov's exchange cone model, it is shown how the selection risk measure relates to the set‐valued risk measures considered by Kulikov (2008, Theory Probab. Appl. 52, 614–635), Hamel and Heyde (2010, SIAM J. Financ. Math. 1, 66–95), and Hamel, Heyde, and Rudloff (2013, Math. Financ. Econ. 5, 1–28).

19.
This paper presents a new measure of skewness, skewness‐aware deviation, that can be linked to prospective satisficing risk measures and tail risk measures such as Value‐at‐Risk. We show that this measure of skewness also arises naturally when one maximizes the certainty equivalent for an investor with a negative exponential utility function, thus bringing together the mean‐risk, expected utility, and prospective satisficing frameworks for an important class of investor preferences. We generalize the notions of variance and covariance in the new skewness‐aware asset pricing and allocation framework. We show via computational experiments that the proposed approach results in improved and intuitively appealing asset allocations when returns follow real‐world or simulated skewed distributions. We also suggest a skewness‐aware equivalent of the classical Capital Asset Pricing Model beta, and study its consistency with the observed behavior of the stocks traded on the NYSE between 1963 and 2006.
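The certainty equivalent for negative exponential utility, CE = -(1/gamma) log E[exp(-gamma X)], is skewness-aware: of two positions with identical mean and variance, the negatively skewed one has the lower CE. The two discrete distributions below are constructed (not taken from the paper) to share mean 0 and variance 1.

```python
import math

def certainty_equivalent(outcomes, probs, gamma):
    # CE for negative exponential utility u(x) = -exp(-gamma x):
    #   CE = -(1 / gamma) * log E[exp(-gamma X)]
    m = sum(p * math.exp(-gamma * x) for p, x in zip(probs, outcomes))
    return -math.log(m) / gamma

# two returns with identical mean (0) and variance (1)...
symmetric = ([-1.0, 1.0], [0.5, 0.5])
neg_skew = ([-2.0, 0.5], [0.2, 0.8])   # ...but negative skewness

gamma = 1.0
ce_sym = certainty_equivalent(*symmetric, gamma)
ce_neg = certainty_equivalent(*neg_skew, gamma)
print(round(ce_sym, 4), round(ce_neg, 4))  # the skewed one is worth less
```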

20.
We introduce a new approach for the numerical pricing of American options. The main idea is to choose a finite number of suitable excessive functions (randomly) and to find the smallest majorant of the gain function in the span of these functions. The resulting problem is a linear semi‐infinite programming problem that can be solved using standard algorithms. This leads to good upper bounds for the original problem. Our algorithms require no discretization of space or time and no simulation. Furthermore, the approach is applicable even to high‐dimensional problems. The algorithm provides an approximation of the value not only for one starting point, but for the complete value function on the continuation set, so that the optimal exercise region and, for example, the Greeks can be calculated. We apply the algorithm to one‐ and multidimensional diffusions and show it to be fast and accurate.
