Similar Literature
20 similar documents found (search time: 31 ms)
1.
刘利  董惠  王栋  薛宏民 《金卡工程》2006,10(8):44-48
Face detection is an important component of face recognition systems. This paper presents a fast face detection algorithm based on subsets of wavelet coefficients: multiple detectors are used to handle face images at various angles, visual attributes are represented by quantized subsets of wavelet coefficients, and a correlation measure is used to converge on the best matching position. Experiments show that the method is an effective and fast face detection algorithm.

2.
Abstract

In this paper we consider the valuation of Bermudan callable derivatives with multiple exercise rights. We present in this context a new primal–dual linear Monte Carlo algorithm that allows for efficient simulation of the lower and upper price bounds without using nested simulations (hence the terminology). The algorithm is essentially an extension of the primal–dual Monte Carlo algorithm for standard Bermudan options proposed by Schoenmakers et al. [SIAM J. Finance Math., 2013, 4, 86–116] to the case of multiple exercise rights. In particular, the algorithm constructs upwardly a system of dual martingales to be plugged into the dual representation of Schoenmakers. At each level, the respective martingale is constructed via a backward regression procedure starting at the last exercise date. The martingales constructed in this way are finally used to compute an upper price bound. The algorithm also provides approximate continuation functions that may be used to construct a lower price bound. The algorithm is applied to the pricing of flexible caps in a Hull and White model setup. The simple model choice allows for comparison of the computed price bounds with the exact price obtained by means of a trinomial tree implementation. As a result, we obtain tight price bounds for the considered application. Moreover, the algorithm is generically designed for multi-dimensional problems and is straightforward to implement.

3.
This article applies machine learning techniques to credibility theory and proposes a regression-tree-based algorithm to integrate covariate information into credibility premium prediction. The recursive binary algorithm partitions a collective of individual risks into mutually exclusive subcollectives and applies the classical Bühlmann-Straub credibility formula for the prediction of individual net premiums. The algorithm provides a flexible way to integrate covariate information into the prediction of individual net premiums. It is appealing for capturing nonlinear and/or interaction covariate effects. It automatically selects influential covariate variables for premium prediction and requires no additional ex ante variable selection procedure. The superiority in prediction accuracy of the proposed algorithm is demonstrated by extensive simulation studies. The proposed method is applied to U.S. Medicare data for illustration purposes.
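The Bühlmann-Straub formula applied within each subcollective can be sketched as follows. This is a minimal illustration in which the structural parameters — the collective mean `mu`, the expected process variance `s2`, and the variance of the hypothetical means `a` — are assumed to be already estimated; the regression-tree partitioning itself is not shown.

```python
def buhlmann_straub_premium(claims, weights, mu, s2, a):
    """Bühlmann-Straub net premium for one individual risk.

    claims  : per-period claim ratios X_ij
    weights : per-period exposure weights w_ij
    mu      : collective (subcollective-wide) mean
    s2      : expected process variance
    a       : variance of the hypothetical means
    """
    w = sum(weights)
    # exposure-weighted individual mean
    x_bar = sum(wi * xi for wi, xi in zip(weights, claims)) / w
    # credibility factor in [0, 1): more exposure -> more weight on own data
    z = a * w / (a * w + s2)
    return z * x_bar + (1 - z) * mu
```

With `a = 0` (no heterogeneity between risks) the premium collapses to the collective mean `mu`; as exposure grows, the premium moves toward the individual mean.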

4.
A novel algorithm is developed for the problem of finding a low-rank correlation matrix nearest to a given correlation matrix. The algorithm is based on majorization and, therefore, it is globally convergent. The algorithm is computationally efficient, is straightforward to implement, and can handle arbitrary weights on the entries of the correlation matrix. A simulation study suggests that majorization compares favourably with competing approaches in terms of the quality of the solution within a fixed computational time. The problem of rank reduction of correlation matrices occurs when pricing a derivative dependent on a large number of assets, where the asset prices are modelled as correlated log-normal processes. Such an application mainly concerns interest rates.

5.
A self-consistent algorithm will be proposed to non-parametrically estimate the cause-specific cumulative incidence functions (CIFs) in an interval censored, multiple decrement context. More specifically, the censoring mechanism will be assumed to be a mixture of case 2 interval-censored data with the additional possibility of exact observations. The proposed algorithm is a generalization of the classical univariate algorithms of Efron and Turnbull. However, unlike any previous non-parametric models proposed in the literature to date, the algorithm will explicitly allow for the possibility of any combination of masked modes of failure, where failure is known only to occur due to a subset from the set of all possible causes. A simulation study is also conducted to demonstrate the consistency of the estimators of the CIFs produced by the proposed algorithm, as well as to explore the effect of masking. The paper concludes by applying the method to masked mortality data obtained for Pueblo County, CO, for three risks: death by cancer, cardiovascular failure, or other causes.

6.
In this article we propose a novel approach to reduce the computational complexity of the dual method for pricing American options. We consider a sequence of martingales that converges to a given target martingale and decompose the original dual representation into a sum of representations that correspond to different levels of approximation to the target martingale. By next replacing in each representation true conditional expectations with their Monte Carlo estimates, we arrive at what one may call a multilevel dual Monte Carlo algorithm. The analysis of this algorithm reveals that the computational complexity of getting the corresponding target upper bound, due to the target martingale, can be significantly reduced. In particular, it turns out that using our new approach, we may construct a multilevel version of the well-known nested Monte Carlo algorithm of Andersen and Broadie (Manag. Sci. 50:1222–1234, 2004) that is, regarding complexity, virtually equivalent to a non-nested algorithm. The performance of this multilevel algorithm is illustrated by a numerical example.

7.
This paper studies the Markowitz mean–variance model from a new perspective. By reformulating the Markowitz mean–variance model as a constrained least-squares problem and then applying algorithms for constrained least squares, we address the solution of the model when the covariance matrix is positive definite or positive semi-definite, and give new algorithms for computing the efficient frontier and the minimum-variance portfolio. Empirical analysis shows that the least-squares algorithms outperform traditional algorithms in computational stability and speed.
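As background to the reformulation above, a minimal sketch of computing the minimum-variance portfolio via the textbook route w ∝ Σ⁻¹·1 (a direct linear solve, not the paper's constrained least-squares algorithm), assuming a positive definite covariance matrix:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))  # pivot row
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):  # back substitution
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def min_variance_weights(cov):
    """Minimum-variance portfolio: w proportional to inv(Sigma)*1,
    rescaled so the weights sum to one. Requires positive definite cov."""
    n = len(cov)
    y = solve(cov, [1.0] * n)
    s = sum(y)
    return [yi / s for yi in y]
```

For a diagonal covariance this reduces to weighting each asset inversely to its variance.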

8.
Asian options are a kind of path-dependent derivative. How to price such derivatives efficiently and accurately has been a long-standing research and practical problem. This paper proposes a novel multiresolution (MR) trinomial lattice for pricing European- and American-style arithmetic Asian options. Extensive experimental work suggests that this new approach is both efficient and more accurate than existing methods. It also computes the numerical delta accurately. The MR algorithm is exact as no errors are introduced during backward induction. In fact, it may be the first exact discrete-time algorithm to break the exponential-time barrier. The MR algorithm is guaranteed to converge to the continuous-time value. This revised version was published online in June 2006 with corrections to the Cover Date.
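For orientation, a plain single-resolution trinomial lattice with backward induction for a vanilla American put — a simplified background sketch, not the multiresolution Asian-option algorithm, which must additionally track the running average:

```python
import math

def american_put_trinomial(S0, K, r, sigma, T, steps):
    """Price an American put on a Boyle-style trinomial lattice.

    Backward induction with an early-exercise check at every node.
    """
    dt = T / steps
    u = math.exp(sigma * math.sqrt(2 * dt))      # up factor; mid = 1, down = 1/u
    e = math.exp(sigma * math.sqrt(dt / 2))
    g = math.exp(r * dt / 2)
    pu = ((g - 1 / e) / (e - 1 / e)) ** 2        # risk-neutral branch probabilities
    pd = ((e - g) / (e - 1 / e)) ** 2
    pm = 1 - pu - pd
    disc = math.exp(-r * dt)
    # terminal layer: 2*steps + 1 nodes, node j has price S0 * u**(steps - j)
    values = [max(K - S0 * u ** (steps - j), 0.0) for j in range(2 * steps + 1)]
    for n in range(steps - 1, -1, -1):
        values = [
            max(max(K - S0 * u ** (n - j), 0.0),                     # exercise now
                disc * (pu * values[j] + pm * values[j + 1] + pd * values[j + 2]))
            for j in range(2 * n + 1)
        ]
    return values[0]
```

Because the exercise value is compared at every node, the root value can never fall below the option's intrinsic value.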

9.
This paper presents a new model for dynamically deciding when, how, and to what extent soft constraints should be relaxed. The first part of the model is a depth-first search algorithm and a best-first repair algorithm which can generate partial schedules quickly. The second part is an iterative relaxation algorithm which can augment the generated partial schedules by slightly relaxing potentially relaxable constraints (i.e. soft constraints). The model guarantees that (1) a soft constraint will be relaxed only when no backtrack (repair) can be made within a time limit, (2) the relaxation of soft constraints can always deepen the search tree, and (3) the relaxation will only be made at dead nodes, and when the search algorithm can be continued the relaxed soft constraints will return to their initial states. The model has been successfully applied to two staff scheduling problems: a dispatcher scheduling problem and a crew scheduling problem. Copyright © 1999 John Wiley & Sons, Ltd.

10.
The Markowitz critical line method for mean–variance portfolio construction has remained highly influential today, since its introduction to the finance world six decades ago. The Markowitz algorithm is so versatile and computationally efficient that it can accommodate any number of linear constraints in addition to full allocations of investment funds and disallowance of short sales. For the Markowitz algorithm to work, the covariance matrix of returns, which is positive semi-definite, need not be positive definite. As a positive semi-definite matrix may not be invertible, it is intriguing that the Markowitz algorithm always works, although matrix inversion is required in each step of the iterative procedure involved. By examining some relevant algebraic features in the Markowitz algorithm, this paper is able to identify and explain intuitively the consequences of relaxing the positive definiteness requirement, as well as drawing some implications from the perspective of portfolio diversification. For the examination, the sample covariance matrix is based on insufficient return observations and is thus positive semi-definite but not positive definite. The results of the examination can facilitate a better understanding of the inner workings of the highly sophisticated Markowitz approach by the many investors who use it as a tool to assist portfolio decisions and by the many students who are introduced pedagogically to its special cases.
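The rank deficiency described above is easy to demonstrate: a sample covariance matrix built from T return observations of N assets has rank at most T − 1, so with fewer observations than assets it is singular. A small illustration with hypothetical return data (the numbers are made up):

```python
def sample_cov(returns):
    """Sample covariance matrix (divisor T-1) of T observations of N assets."""
    T = len(returns)
    N = len(returns[0])
    means = [sum(row[i] for row in returns) / T for i in range(N)]
    return [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in returns) / (T - 1)
             for j in range(N)] for i in range(N)]

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# two return observations of three assets: rank(cov) <= 1, so det(cov) = 0
R = [[0.01, 0.03, -0.02],
     [0.02, -0.01, 0.04]]
C = sample_cov(R)
```

The matrix `C` is positive semi-definite (non-negative variances on the diagonal) but not invertible, which is exactly the situation the paper examines.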

11.
This paper demonstrates a tractable and efficient way of calibrating a multiscale exponential Ornstein–Uhlenbeck stochastic volatility model including a correlation between the asset return and its volatility. As opposed to many contributions where this correlation is assumed to be null, this framework allows one to describe the leverage effect widely observed in equity markets. The resulting model is non-exponential and driven by a degenerate noise, thus requiring a high level of care in designing the estimation algorithm. The way this difficulty is overcome provides guidelines concerning the development of an estimation algorithm in a non-standard framework. The authors propose using a block-type expectation maximization algorithm along with particle smoothing. This method results in an accurate calibration process able to identify up to three timescale factors. Furthermore, we introduce an intuitive heuristic which can be used to choose the number of factors.

12.
13.
《Quantitative Finance》2013,13(6):451-457
Abstract

This paper presents a new algorithm to calibrate the option pricing model, i.e. to recover the implied local volatility function from market option prices, in an optimal control framework. A unique optimal control is shown to exist, and the calibration problem is well-posed. Our numerical experiments show that, with the help of techniques developed in the field of optimal control, the local volatility function is recovered very well.

14.
The paper investigates the term structure of interest rates imposed by equilibrium in a production economy consisting of participants with heterogeneous preferences. Consumption is restricted to an arbitrary number of discrete times. The paper contains an exact solution to market equilibrium and provides an explicit constructive algorithm for determining the state price density process. The convergence of the algorithm is proven. Interest rates and their behavior are given as a function of economic variables.

15.
The authors report on the construction of a new algorithm for the weak approximation of stochastic differential equations. In this algorithm, an ODE-valued random variable whose average approximates the solution of the given stochastic differential equation is constructed by using the notion of free Lie algebras. It is proved that the classical Runge–Kutta method for ODEs is directly applicable to the ODE drawn from the random variable. In a numerical experiment, this is applied to the problem of pricing Asian options under the Heston stochastic volatility model. Compared with some other methods, this algorithm is significantly faster. This research was partly supported by the Ministry of Education, Science, Sports and Culture, Grant-in-Aid for Scientific Research (C), 15540110, 2003 and 18540113, 2006, the 21st century COE program at Graduate School of Mathematical Sciences, the University of Tokyo, and JSPS Core-to-Core Program 18005.

16.
《Quantitative Finance》2013,13(1):64-69
Abstract

How to model the dependence between defaults in a portfolio subject to credit risk is a question of great importance. The infectious default model of Davis and Lo offers a way to model the dependence. Every company defaulting in this model may ‘infect’ another company causing it to default. An unsolved question, however, is how to aggregate independent sectors, since a naive straightforward computation quickly gets cumbersome, even when homogeneous assumptions are made. Here, two algorithms are derived that overcome the computational problem and further make it possible to use different exposures and probabilities of default for each sector. For an ‘outbreak’ of defaults to occur in a sector, at least one company has to default by itself. This fact is used in the derivations of the two algorithms. The first algorithm is derived from the probability generating function of the total credit loss in each sector and the fact that the outbreaks are independent Bernoulli random variables. The second algorithm is an approximation and is based on a Poisson number of outbreaks in each sector. This algorithm is less cumbersome and more numerically stable, but still seems to work well in a realistic setting.
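A brute-force Monte Carlo simulation of the Davis–Lo infection mechanism (not the paper's analytic aggregation algorithms) can serve as a reference point for such computations. Here `p` is the direct-default probability and `q` the infection probability, both assumed homogeneous within the sector:

```python
import random

def simulate_davis_lo(n, p, q, trials, seed=0):
    """Monte Carlo estimate of the expected number of defaults in the
    Davis-Lo infectious default model: each of n firms defaults directly
    with probability p, and every direct defaulter independently infects
    each other firm with probability q."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        direct = [rng.random() < p for _ in range(n)]
        k = sum(direct)  # number of direct defaulters this trial
        defaults = 0
        for i in range(n):
            if direct[i]:
                defaults += 1
            else:
                # firm i is infected if any of the k direct defaulters passes it on
                if any(rng.random() < q for _ in range(k)):
                    defaults += 1
        total += defaults
    return total / trials
```

In this homogeneous case the exact per-firm default probability is p + (1 − p)(1 − (1 − pq)^(n−1)), which the simulation should reproduce up to Monte Carlo error.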

17.
We propose a multi-stock automated trading system that relies on a layered structure consisting of a machine learning algorithm, an online learning utility, and a risk management overlay. Alternating decision tree (ADT), which is implemented with Logitboost, was chosen as the underlying algorithm. One of the strengths of our approach is that the algorithm is able to select the best combination of rules derived from well-known technical analysis indicators and is also able to select the best parameters of the technical indicators. Additionally, the online learning layer combines the output of several ADTs and suggests a short or long position. Finally, the risk management layer can validate the trading signal when it exceeds a specified non-zero threshold and limit the application of our trading strategy when it is not profitable. We test the expert weighting algorithm with data of 100 randomly selected companies of the S&P 500 index during the period 2003–2005. We find that this algorithm generates abnormal returns during the test period. Our experiments show that the boosting approach is able to improve predictive capacity when indicators are combined and aggregated as a single predictor. Moreover, combining indicators across different stocks proved adequate for reducing the use of computational resources while still maintaining adequate predictive capacity.

18.
We formulate a mean-variance portfolio selection problem that accommodates qualitative input about expected returns and provide an algorithm that solves the problem. This model and algorithm can be used, for example, when a portfolio manager determines that one industry will benefit more from a regulatory change than another but is unable to quantify the degree of difference. Qualitative views are expressed in terms of linear inequalities among expected returns. Our formulation builds on the Black-Litterman model for portfolio selection. The algorithm makes use of an adaptation of the hit-and-run method for Markov chain Monte Carlo simulation. We also present computational results that illustrate advantages of our approach over alternative heuristic methods for incorporating qualitative input.
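A minimal sketch of the hit-and-run step over a polytope of linear inequalities {x : A·x ≤ b} — the generic sampler only, not the paper's adaptation to expected-return views:

```python
import random
import math

def hit_and_run(A, b, x0, steps, seed=0):
    """Hit-and-run sampler over the polytope {x : A.x <= b},
    started from a strictly interior point x0 (bounded polytope assumed)."""
    rng = random.Random(seed)
    n = len(x0)
    x = list(x0)
    samples = []
    for _ in range(steps):
        # random direction, uniform on the unit sphere
        d = [rng.gauss(0, 1) for _ in range(n)]
        norm = math.sqrt(sum(di * di for di in d))
        d = [di / norm for di in d]
        # intersect the line x + t*d with every half-space to get [t_lo, t_hi]
        t_lo, t_hi = -float("inf"), float("inf")
        for a_row, bi in zip(A, b):
            ad = sum(ai * di for ai, di in zip(a_row, d))
            ax = sum(ai * xi for ai, xi in zip(a_row, x))
            if abs(ad) < 1e-12:
                continue  # direction parallel to this face
            t = (bi - ax) / ad
            if ad > 0:
                t_hi = min(t_hi, t)
            else:
                t_lo = max(t_lo, t)
        # jump to a uniform point on the feasible chord
        t = rng.uniform(t_lo, t_hi)
        x = [xi + t * di for xi, di in zip(x, d)]
        samples.append(list(x))
    return samples
```

Every iterate stays inside the polytope by construction, which is what makes the method attractive for sampling views expressed as inequality constraints.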

19.
CreditRisk+ is an influential and widely implemented model of portfolio credit risk. As a close variant of models long used for insurance risk, it retains the analytical tractability for which the insurance models were designed. Value-at-risk (VaR) can be obtained via a recurrence-rule algorithm, so Monte Carlo simulation can be avoided. Little recognized, however, is that the algorithm is fragile. Under empirically realistic conditions, numerical error can accumulate in the execution of the recurrence rule and produce wildly inaccurate results for VaR. This paper provides new tools for users of CreditRisk+ based on the cumulant generating function (cgf) of the portfolio loss distribution. Direct solution for the moments of the loss distribution from the cgf is almost instantaneous and is computationally robust. Thus, the moments provide a convenient, quick and independent diagnostic on the implementation and execution of the standard solution algorithm. Better still, with the cgf in hand we have an alternative to the standard algorithm. I show how tail percentiles of the loss distribution can be calculated quickly and easily by saddlepoint approximation. On a large and varied sample of simulated test portfolios, I find a natural complementarity between the two algorithms: Saddlepoint approximation is accurate and robust in those situations for which the standard algorithm performs least well, and is less accurate in those situations for which the standard algorithm is fast and reliable.  
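The moment computation from the cgf is immediate in the simplest Poisson-approximation variant of a CreditRisk+-style model, where the loss is L = Σ_j N_j·v_j with N_j ~ Poisson(λ_j), the cgf is K(t) = Σ_j λ_j(e^{v_j t} − 1), and the m-th cumulant is κ_m = Σ_j λ_j v_j^m. A sketch under those assumptions (sector mixing and the saddlepoint step are omitted):

```python
import math

def loss_cumulants(lams, exposures, order=3):
    """Cumulants of the Poisson loss L = sum_j N_j * v_j, N_j ~ Poisson(lam_j).

    From K(t) = sum_j lam_j * (exp(v_j * t) - 1), the m-th derivative at
    t = 0 gives kappa_m = sum_j lam_j * v_j**m.
    """
    return [sum(l * v ** m for l, v in zip(lams, exposures))
            for m in range(1, order + 1)]

def loss_moments(lams, exposures):
    """Mean, variance and skewness of the loss, read off the cumulants."""
    k1, k2, k3 = loss_cumulants(lams, exposures, 3)
    return k1, k2, k3 / k2 ** 1.5
```

Because cumulants of independent sectors simply add, this also gives a cheap diagnostic on an aggregated portfolio: compute the cumulants sector by sector and sum them.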

20.
Abstract

We present an approach based on matrix-analytic methods to find moments of the time of ruin in Markovian risk models. The approach is applicable when claims occur according to a Markovian arrival process (MAP) and claim sizes are phase distributed with parameters that depend on the state of the MAP. The method involves the construction of a sample-path-equivalent Markov-modulated fluid flow for the risk model. We develop an algorithm for moments of the time of ruin and prove the algorithm is convergent. Examples show that the proposed approach is computationally stable.
