Similar Documents
 20 similar documents found.
1.
Abstract

1. In 1905 Charlier outlined some methods for the expansion of functions in series (C. V. L. Charlier, Über die Darstellung willkürlicher Funktionen, Meddelanden från Lunds Observatorium, Ser. I, nr 27). He was dealing particularly with frequency functions, but the method has a more general application. As is well known, two kinds of developments were considered, namely in terms of the differentials and in terms of the differences of a conveniently chosen developing function; the outstanding examples are, respectively, the expansions of the so-called types A and B. The difference series has later attracted special attention because its derivation can be attached to the theory of generating functions (I. V. Uspensky, On Ch. Jordan's Series for Probability, Annals of Mathematics, Vol. 32, 1931). The true pivotal function in this respect seems, however, to be the moment generating function. In the following notes it will be shown that the differential series, as well as the difference series built up from the advancing and the central differences, are obtainable in a similar way. By employing some convenient cumulants the different expansions can be written down compactly in symbolic forms which reveal their mutual formal relations. It will further be observed that Charlier's method of expansion is the inversion of a method indicated by Abel.
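For reference, a commonly quoted modern form of the type A expansion of a standardized frequency function is sketched below; the notation (Hermite polynomials He_n, cumulants kappa_3, kappa_4) is an assumption for this sketch and is not the symbolic cumulant form developed in the paper.

```latex
f(x) \approx \varphi(x)\left[\,1 + \frac{\kappa_3}{3!}\,\mathrm{He}_3(x)
      + \frac{\kappa_4}{4!}\,\mathrm{He}_4(x)\right],
\qquad \mathrm{He}_3(x) = x^3 - 3x,\quad \mathrm{He}_4(x) = x^4 - 6x^2 + 3,
```

where φ is the standard normal density and κ₃, κ₄ are the third and fourth cumulants (skewness and excess kurtosis) of the standardized variable.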

2.
Abstract

The aim of this paper is to analyse two functions that are of general interest in the collective risk theory, namely F, the distribution function of the total amount of claims, and Π, the Stop Loss premium. Section 2 presents certain basic formulae. Sections 17-18 present five claim distributions. Corresponding to these claim distributions, the functions F and Π were calculated under various assumptions as to the distribution of the number of claims. These calculations were performed on an electronic computer, and the numerical method used for this purpose is presented in sections 9, 19 and 20 under the name of the C-method, which has the advantage of furnishing upper and lower limits of the quantities under estimation. The means of these limits, in the following regarded as the “exact” results, are given in Tables 4-20. Sections 11-16 present certain approximation methods. The N-method of section 11 is an Edgeworth expansion, while the G-method given in section 12 is an approximation by a Pearson type III curve. The methods presented in sections 13-16, and denoted A1-A4, are all applications and modifications of the Esscher method. These approximation methods have been applied for the calculation of F and Π in the cases mentioned above in which “exact” results were obtained. The results are given in Tables 4-20. The object of this investigation was to obtain information as to the precision of the approximation methods in question, and to compare their relative merits. These results are discussed in sections 21-24.
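To make the flavour of the N-method concrete, here is a minimal sketch (not the paper's implementation) of an Edgeworth-type approximation to the distribution of total claims for a compound Poisson portfolio; the intensity, claim moments and evaluation point are purely illustrative.

```python
import numpy as np
from scipy.stats import norm

def edgeworth_cdf(x, lam, m1, m2, m3, m4):
    """Edgeworth approximation to the CDF of a compound Poisson total claim
    amount with intensity `lam` and raw claim moments m1..m4.
    Cumulants of the total: kappa_n = lam * m_n."""
    mu, var = lam * m1, lam * m2
    sd = np.sqrt(var)
    g1 = lam * m3 / var**1.5           # skewness
    g2 = lam * m4 / var**2             # excess kurtosis
    z = (x - mu) / sd
    he2 = z**2 - 1
    he3 = z**3 - 3 * z
    he5 = z**5 - 10 * z**3 + 15 * z
    return norm.cdf(z) - norm.pdf(z) * (g1 / 6 * he2 + g2 / 24 * he3
                                        + g1**2 / 72 * he5)

# Illustrative numbers only: Exp(1) claims (raw moments n!), 100 expected claims.
lam = 100.0
m = [1.0, 2.0, 6.0, 24.0]
print(edgeworth_cdf(130.0, lam, *m))   # approx. P(total claims <= 130)
```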

3.
Abstract

In the new Rules of Calculation of the Life Insurance Co. Framtiden, the old idea of the continuous mode of payment has been realized. In practice, this method only signifies that the premium is to be restored for the time elapsed after the moment of death. Theoretically, it makes the computation of premiums by the aid of (exact or approximate) yearly, half-yearly, quarterly and monthly annuities unnecessary, since only continuous annuities are needed. It may perhaps be of interest to give a more detailed account of the method employed.

4.
This paper extends the Fourier-cosine (COS) method to the pricing and hedging of variable annuities embedded with guaranteed minimum withdrawal benefit (GMWB) riders. The COS method facilitates efficient computation of prices and hedge ratios of the GMWB riders when the underlying fund dynamics evolve under the influence of the general class of Lévy processes. Formulae are derived to value the contract at each withdrawal date using a backward recursive dynamic programming algorithm. Numerical comparisons are performed with results presented in Bacinello et al. [Scand. Actuar. J., 2014, 1–20] and Luo and Shevchenko [Int. J. Financ. Eng., 2014, 2, 1–24] to confirm the accuracy of the method. The efficiency of the proposed method is assessed by making comparisons with the approach presented in Bacinello et al. [op. cit.]. We find that the COS method produces highly accurate results with notably fast computational times. The valuation framework forms the basis for GMWB hedging. A local risk minimisation approach to hedging intra-withdrawal date risks is developed. A variety of risk measures are considered for minimisation in the general Lévy framework. While the second moment and variance have been considered in the existing literature, we show that Value-at-Risk (VaR) may also be of interest as a risk measure for minimising risk in variable annuity portfolios.
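As background on the core tool, here is a minimal sketch of the COS density-recovery step from a characteristic function; a standard normal is used purely as a test case, and the GMWB-specific recursion, Lévy model and truncation-range choice of the paper are not reproduced.

```python
import numpy as np

def cos_density(phi, a, b, N, y):
    """Recover a density on [a, b] from its characteristic function `phi`
    using the Fourier-cosine (COS) expansion with N terms."""
    k = np.arange(N)
    u = k * np.pi / (b - a)
    A = 2.0 / (b - a) * np.real(phi(u) * np.exp(-1j * u * a))
    A[0] *= 0.5                                    # first term is halved
    return A @ np.cos(np.outer(u, y - a))

# Test case: standard normal, characteristic function exp(-u^2 / 2).
phi = lambda u: np.exp(-0.5 * u**2)
y = np.linspace(-3, 3, 7)
approx = cos_density(phi, a=-10.0, b=10.0, N=128, y=y)
exact = np.exp(-0.5 * y**2) / np.sqrt(2 * np.pi)
print(np.max(np.abs(approx - exact)))              # error near machine precision
```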

5.
Book Review     
1. Introductory. In the empirical study of demand behaviour, as in other investigations into economic factors and their interrelations, the main statistical tool has been the mean square (m.sq.) method of regression. (Following H. Cramér [1], we adopt this abbreviation for the classical least squares regression method; for a survey, see the standard treatise of H. Schultz [1].) It is equally well known, however, that this method has met much criticism. No doubt there are still many questions to discuss in this field.

6.
We present a number of related comparison results which allow one to compare moment explosion times, moment generating functions and critical moments between rough and non-rough Heston models of stochastic volatility. All results are based on a comparison principle for certain non-linear Volterra integral equations. Our upper bound for the moment explosion time differs from the bound introduced by Gerhold, Gerstenecker and Pinter [Moment explosions in the rough Heston model. Decisions in Economics and Finance, 2019, 42, 575–608] and is tighter for typical parameter values. The results can be directly transferred to a comparison principle for the asymptotic slope of implied variance between rough and non-rough Heston models. This principle shows that the ratio of implied variance slopes in the rough versus the non-rough Heston model increases at least with power-law behavior for small maturities.
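For context, in one common parametrisation (mean reversion λ, vol-of-vol ξ, correlation ρ, Hurst index H; these symbols are assumptions of this sketch, not taken from the paper) the moment generating function of the rough Heston log-price is governed by a Riccati–Volterra equation of roughly the following form, and the moment explosion time of order u is the blow-up time of its solution:

```latex
\psi(u,t) \;=\; \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)}
\Big( \tfrac{1}{2}\,(u^2-u) + (\rho\,\xi\,u - \lambda)\,\psi(u,s)
      + \tfrac{1}{2}\,\xi^{2}\,\psi(u,s)^{2} \Big)\, ds,
\qquad \alpha = H + \tfrac{1}{2}.
```

For α = 1 this reduces to the classical Heston Riccati ODE; it is a non-linear Volterra integral equation of the kind the abstract's comparison principle addresses.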

7.
Quantitative Finance, 2013, 13(3), 212–219
Abstract

In this paper, we propose a data and digital-contracts driven (DDCD) method for pricing various complex options. The DDCD method is a combination of nonparametric and parametric methods. In general, nonparametric data-driven methods use observed data directly as training data for a learning network. In contrast, the proposed DDCD method adds some European-style digital contracts (DCs) on the underlying assets as auxiliary information to guide the learning of the pricing formula. The DCs can be obtained from the observed data by parametric methods. The DCs thus act as hints for the pricing formula, and the DDCD method therefore achieves better pricing accuracy than the common data-driven method in practical applications. Some Monte Carlo simulation experiments are performed, and the results demonstrate that the proposed method not only has the generalization ability and superior accuracy of the nonparametric method, but also the robustness to noisy financial data of the parametric method.

8.

A new formulation of Gompertz' law of mortality is proposed. Explicit formulas for the moment generating function and moments of this formulation are derived. The new formulation is applied to the theory of heterogeneous populations, and formulas for stop-loss transformations are derived.
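For reference, the classical Gompertz law that the paper reformulates can be written in standard actuarial notation as follows (the paper's new formulation itself is not reproduced here):

```latex
\mu(x) = B\,c^{x}, \qquad
S(x) = \exp\!\Big( -\frac{B}{\ln c}\,\big(c^{x}-1\big) \Big),
\qquad B > 0,\; c > 1,
```

where μ(x) is the force of mortality at age x and S(x) is the corresponding survival function from age 0.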

9.
Abstract

This case study illustrates the analysis of two possible regression models for bivariate claims data. Estimates or forecasts of loss distributions under these two models are developed using two methods of analysis: (1) maximum likelihood estimation and (2) the Bayesian method. These methods are applied to two data sets consisting of 24 and 1,500 paired observations, respectively. The Bayesian analyses are implemented using Markov chain Monte Carlo via WinBUGS, as discussed in Scollnik (2001). A comparison of the analyses reveals that forecasted total losses can be dramatically underestimated by the maximum likelihood estimation method because it ignores the inherent parameter uncertainty.
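The last point can be illustrated with a deliberately simple toy model (a univariate exponential claim severity with a conjugate gamma prior, not the bivariate regression models of the case study): the plug-in MLE predictive tail is thinner than the Bayesian predictive tail precisely because it ignores parameter uncertainty. All data and prior settings below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
claims = rng.exponential(scale=10.0, size=24)    # small illustrative sample

# Plug-in MLE predictive: Exponential with rate 1 / xbar.
mle_rate = 1.0 / claims.mean()
q = 50.0
p_mle = np.exp(-mle_rate * q)                    # P(next claim > q), plug-in

# Bayesian predictive with conjugate Gamma(a0, b0) prior on the rate:
# posterior rate ~ Gamma(a0 + n, b0 + sum(x)); predictive tail is Lomax.
a0, b0 = 1.0, 1.0                                # weak prior (assumed)
a_n, b_n = a0 + claims.size, b0 + claims.sum()
p_bayes = (b_n / (b_n + q)) ** a_n               # P(next claim > q), predictive

print(p_mle, p_bayes)   # the Bayesian predictive tail is typically heavier
```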

10.
Abstract

There are two competing and seemingly different methodologies for calculating fair values: the direct and the indirect methods. The direct approach has the advantage of providing a more reliable assessment of the risk of financial leverage. The indirect method can be structured to adjust for financial leverage; however, the methodology then becomes excessively complex. The advantage of the indirect method is that it can be more easily related to exit prices. Intuitively, an exit price should reflect both the creditworthiness of the firm and the cost of capital of the firm. How are these two concepts related? This paper attempts to advance the fair valuation methodology by addressing these questions and presenting a methodology for deriving the firm or own credit risk assumption (to be used with the direct method) that is consistent with the cost of capital assumption used with the indirect method.

11.

We consider the classical risk model with unknown claim size distribution F and unknown Poisson arrival rate λ. Given a sample of claims from F and a sample of interarrival times for these claims, we construct an estimator for the function Z(u), which gives the probability of non-ruin in that model for initial surplus u. We obtain strong consistency and asymptotic normality of that estimator for a large class of claim distributions F. Confidence bounds for Z(u) based on the bootstrap are also given and illustrated by some numerical examples.
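A crude plug-in scheme in the same spirit (not the estimator analysed in the paper) is to resample the observed claims and interarrival times, simulate the surplus process over a long but finite horizon, and bootstrap the whole procedure for confidence bounds; the data, premium rate and horizon below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def nonruin_prob(u, c, claims, gaps, horizon=100.0, n_paths=1000):
    """Crude finite-horizon Monte Carlo estimate of the non-ruin probability
    Z(u), plugging in the observed claims and interarrival times."""
    lam = 1.0 / gaps.mean()                        # estimated Poisson rate
    survived = 0
    for _ in range(n_paths):
        t, total_claims, ruined = 0.0, 0.0, False
        while True:
            t += rng.exponential(1.0 / lam)        # next claim arrival
            if t > horizon:
                break
            total_claims += rng.choice(claims)     # resample an observed claim
            if u + c * t - total_claims < 0:       # ruin can only occur at claims
                ruined = True
                break
        survived += not ruined
    return survived / n_paths

# Illustrative data, premium rate and a simple percentile bootstrap for Z(u).
claims = rng.exponential(1.0, size=200)
gaps = rng.exponential(1.0, size=200)
u, c = 5.0, 1.2
z_hat = nonruin_prob(u, c, claims, gaps)
boot = [nonruin_prob(u, c, rng.choice(claims, claims.size),
                     rng.choice(gaps, gaps.size), n_paths=500)
        for _ in range(50)]
print(z_hat, np.percentile(boot, [2.5, 97.5]))
```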

12.
Abstract

Non-cancellable sickness and disability insurance, in Sweden known as long-term sickness insurance, has been carried on in Sweden since the beginning of the twentieth century: first by Eir from 1911, then by Valkyrian from 1912, and by Salus, a special company for physicians, from 1929. An account of the technical methods employed by Eir in sickness insurance is given in a paper which was read before the Ninth International Congress of Actuaries in 1930. In several important respects a new epoch has been established as regards sickness insurance in Sweden. On 1 January 1955 compulsory sickness insurance was introduced, and thus an essential part of the demand for sickness insurance was covered. At the same time three of the life assurance companies, Thule, Svenska Liv, and Städernas Liv, have also begun to carry on the type of sickness insurance which had previously been effected only by the three companies mentioned above, whose activities are restricted to sickness insurance. The apprehensions that might have been felt respecting a possible glut of the market were not confirmed; on the contrary, the interest in long-term sickness insurance appears to be increasing.

13.
Abstract

Introduction.

Consider a unit of risk, say the whole portfolio of an office, or a comprehensive contract of a branch of casualty insurance, which can give rise to a variety of total amounts of claims during a chosen period, say one year. The total claims of the years i = 1, 2, ... will be denoted by x_i. They follow some frequency distribution, and we assume that during the years considered they are independent from year to year and subject to the same parent distribution. This means, implicitly, that the volume of business and the value of money have remained unaltered, and this assumption will be made, since the adjustments otherwise needed are technically trivial and we are not dealing here with the commercial aspect (difficult though it may be of solution) arising out of changes in monetary value. The frequency distribution mentioned can then be regarded as given by a sample from a population whose probability distribution is given by p(x), say.

14.
The aim of this study is to present an efficient and easy framework for applying the Least Squares Monte Carlo methodology to the pricing of gas or power facilities, as detailed in Boogert and de Jong [J. Derivatives, 2008, 15, 81–91]. As mentioned in the seminal paper by Longstaff and Schwartz [Rev. Financ. Stud., 2001, 113–147], the convergence of the Least Squares Monte Carlo algorithm depends on the convergence of the optimization combined with the convergence of the pure Monte Carlo method. In the context of energy facilities, the optimization is more complex and its convergence is of fundamental importance, in particular for the computation of sensitivities and optimal dispatched quantities. To our knowledge, an extensive study of the convergence, and hence of the reliability of the algorithm, has not yet been performed, in our opinion because of the apparent infeasibility and complexity of using a very high number of simulations. We then present an easy way to simulate random trajectories by means of diffusion bridges, in contrast to Dutt and Welke [J. Derivatives, 2008, 15 (4), 29–47], who consider time-reversal Itô diffusions and subordinated processes. In particular, we show that in the case of the Cox-Ingersoll-Ross and Heston models, the bridge approach has the advantage of producing exact simulations even for non-Gaussian processes, in contrast to the time-reversal approach. Our methodology permits performing a backward dynamic programming strategy based on a huge number of simulations without storing the whole simulated trajectory. Generally, in the valuation of energy facilities, one is also interested in the forward recursion. We therefore design backward and forward recursion algorithms such that one can produce the same random trajectories by the use of multiple independent random streams without storing data at intermediate time steps. Finally, we show the advantages of our methodology for the valuation of virtual hydro power plants and gas storage facilities.
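The bridge idea can be illustrated in the simplest (Brownian) case, where an intermediate point between two pinned values is exactly Gaussian; the exact CIR and Heston bridges referred to in the abstract require the model-specific transition laws and are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def brownian_bridge_point(w0, wT, t, T, sigma=1.0, size=None):
    """Sample W(t) given W(0) = w0 and W(T) = wT for a Brownian motion with
    volatility sigma: conditionally normal with the usual bridge moments."""
    mean = w0 + (t / T) * (wT - w0)
    var = sigma**2 * t * (T - t) / T
    return rng.normal(mean, np.sqrt(var), size=size)

# Fill in a path between pinned endpoints on demand: each new point only
# needs the two values that bracket it, so the full path need not be stored.
w0, wT, T = 0.0, 1.0, 1.0
mid = brownian_bridge_point(w0, wT, 0.5, T)
quarter = brownian_bridge_point(w0, mid, 0.25, 0.5)
print(mid, quarter)
```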

15.
Abstract

We present an approach based on matrix-analytic methods to find moments of the time of ruin in Markovian risk models. The approach is applicable when claims occur according to a Markovian arrival process (MAP) and claim sizes are phase-type distributed with parameters that depend on the state of the MAP. The method involves the construction of a sample-path-equivalent Markov-modulated fluid flow for the risk model. We develop an algorithm for moments of the time of ruin and prove that the algorithm is convergent. Examples show that the proposed approach is computationally stable.

16.
Abstract

In the classical compound Poisson risk model, Lundberg's inequality provides both an upper bound for, and an approximation to, the probability of ultimate ruin. The result can be applied only when the moment generating function of the individual claim amount distribution exists. In this paper we derive an upper bound for the probability of ultimate ruin when the moment generating function of the individual claim amount distribution does not exist.
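For reference, the classical result referred to can be stated in standard notation (λ the claim arrival rate, c the premium rate, M_X the claim-size moment generating function; the notation is assumed here, not taken from the paper):

```latex
\psi(u) \;\le\; e^{-R u},
\qquad \text{where } R > 0 \text{ solves } \lambda + cR = \lambda\, M_X(R),
```

so the bound is available only when M_X is finite in a neighbourhood of R, which is the restriction the paper removes.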

17.
High-order discretization schemes for SDEs using free Lie algebra-valued random variables were introduced by Kusuoka [Adv. Math. Econ., 2004, 5, 69–83], [Adv. Math. Econ., 2013, 17, 71–120], Lyons–Victoir [Proc. R. Soc. Lond. Ser. A Math. Phys. Sci., 2004, 460, 169–198], Ninomiya–Victoir [Appl. Math. Finance, 2008, 15, 107–121] and Ninomiya–Ninomiya [Finance Stochast., 2009, 13, 415–443]. These schemes are called KLNV methods. They involve solving the flows of the vector fields associated with the SDEs, which is usually done numerically. The authors have found a special Lie algebraic structure of the vector fields in the major financial diffusion models. Using this structure, we can solve the flows associated with these vector fields analytically and efficiently. Numerical examples show that our method reduces the computation time drastically.

18.
Abstract

In this paper we consider computational methods for finding exit probabilities for a class of multivariate diffusion processes. Although there is an abundance of results for one-dimensional diffusion processes, for multivariate processes one has to rely on approximations or simulation methods. We adopt a large deviations approach to approximate barrier crossing probabilities of a multivariate Brownian bridge, and use it in conjunction with simulation methods to develop an efficient method for obtaining barrier crossing probabilities of a multivariate Brownian motion. Using numerical examples, we demonstrate that our method works better than other existing methods. We mainly focus on a three-dimensional process, but our framework can be extended to higher dimensions. We present two applications of the proposed method in credit risk modeling. First, we show that we can efficiently estimate the default probabilities of several correlated credit-risky entities. Second, we use this method to efficiently price a credit default swap (CDS) with several correlated reference entities. In a conventional approach one normally adopts an arbitrary copula to capture dependency among counterparties; the method we propose allows us to incorporate the instantaneous variance-covariance structure of the underlying process into the CDS prices.
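The standard one-dimensional ingredient behind such bridge-based barrier corrections is sketched below; the multivariate large-deviations approximation of the paper is not reproduced, the parameters are illustrative, and for a driftless Brownian motion the estimate can be checked against the reflection-principle value.

```python
import numpy as np

rng = np.random.default_rng(3)

def crossing_prob_bridge(x0, x1, b, sigma, dt):
    """P(the Brownian bridge from x0 to x1 over a step of length dt exceeds
    the barrier b), valid when both endpoints lie below b."""
    return np.exp(-2.0 * (b - x0) * (b - x1) / (sigma**2 * dt))

def upcross_prob_mc(x0, b, mu, sigma, T, n_steps=50, n_paths=20000):
    """Monte Carlo estimate of P(max of a Brownian motion with drift exceeds
    b before T), with a bridge correction between discretization points."""
    dt = T / n_steps
    x = np.full(n_paths, x0)
    survive = np.ones(n_paths)            # running prob of no crossing so far
    for _ in range(n_steps):
        x_new = x + mu * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
        hit = (x_new >= b) | (x >= b)
        p_cross = np.where(hit, 1.0,
                           crossing_prob_bridge(x, x_new, b, sigma, dt))
        survive *= 1.0 - p_cross
        x = x_new
    return 1.0 - survive.mean()

print(upcross_prob_mc(x0=0.0, b=1.0, mu=0.0, sigma=1.0, T=1.0))
# Driftless check: exact value is 2 * (1 - Phi(b / (sigma * sqrt(T)))) ~ 0.3173
```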

19.
Using a firm-level panel data set, this paper investigates the impact of taxation on the decision of German multinationals to hold or establish a subsidiary in other European countries or abroad. Taking account of unobserved local characteristics as well as firm-specific preferences for potential locations, the results confirm significant effects of tax incentives, market size, and labor cost on cross-border location decisions. In accordance with Devereux and Griffith (1998), we find that the marginal effective tax rate has no predictive power for location decisions. However, the results indicate a considerably weaker predictive power of the effective average tax rate as compared with the statutory tax rate. JEL Codes: H25, F23, F21, R38.

20.
Abstract

In a recent paper (Hallin & Ingenbleek, 1981a), the selection procedure proposed in Hallin (1977) was applied to the study of the claim probability in the motor third party portfolio of an important Belgian company. The number of observations, however, did not allow for an investigation of the claim amount. The data we study here consist of the entire Swedish portfolio (more than two million policies). An adapted version of the selection procedure provides a good insight into the structure of the risk, hence into what the tariff structure should be. (The rating system used for the Swedish Automobile Insurance Portfolio is based on the factor method; this paper should be looked upon as a critical discussion of the practical application of this method and of its statistical significance.)
