Similar Articles
 Found 20 similar articles (search time: 31 ms)
1.
Abstract

In a recent paper1 I expressed a doubt as to the method followed by Markoff2 in deriving the remainder terms for the developments of f^{(n)}(x) in terms of Δ^n f(x), Δ^{n+1} f(x), …, and of Δ^n f(x) in terms of f^{(n)}(x), f^{(n+1)}(x), …. I have since realized that I did Markoff something less than justice, and that his line of argument, once a slight gap in one of his proofs is filled, is not only sound but readily admits certain generalizations, which I proceed to give.

2.
Abstract

I provide comments on two papers in this issue, Barker and Teixeira ([2018]. Gaps in the IFRS Conceptual Framework. Accounting in Europe, 15) and Van Mourik and Katsuo ([2018]. Profit or loss in the IASB Conceptual Framework. Accounting in Europe, 15), which were presented at the EAA-IASB research forum in Brussels. The paper accepts the shortcomings of the updated IASB conceptual framework and argues that these are in large part due to the origins of the document. It points out that the original US project was an attempt to make standard-setting more consistent and involved creating principles which would explain existing standards. Constituents have subsequently resisted attempts to make the framework theoretically sound because they fear this will encourage too much innovation. Standard-setters prefer incremental change, so they continue to work with a model created to resolve a problem of the 1970s. I suggest that since standard-setting has been professionalised, the more significant need is to define what information investors find useful. This may involve providing more granular information about the entity's business model.

3.
Abstract

This paper examines a portfolio of equity-linked life insurance contracts and determines risk-minimizing hedging strategies within a discrete-time setup. As a principal example, I consider the Cox-Ross-Rubinstein model and an equity-linked pure endowment contract under which the policyholder receives max(S_T, K) at time T if he or she is then alive, where S_T is the value of a stock index at the term T of the contract and K is a guarantee stipulated by the contract. In contrast to most of the existing literature, I view the contracts as contingent claims in an incomplete model and discuss the problem of choosing an optimality criterion for hedging strategies. The subsequent analysis leads to a comparison of the risk (measured by the variance of the insurer's loss) inherent in equity-linked contracts in the two situations where the insurer applies the risk-minimizing strategy and the insurer does not hedge. The paper includes numerical results that can be used to quantify the effect of hedging and describe how this effect varies with the size of the insurance portfolio and assumptions concerning the mortality.
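The guaranteed payoff max(S_T, K) described in this abstract is straightforward to value in a Cox-Ross-Rubinstein lattice by risk-neutral expectation. The sketch below is a minimal illustration of that valuation only, not the paper's risk-minimizing hedging computation; all parameter names and values are my own assumptions, and mortality is ignored (a pure endowment value would further scale by the survival probability to time T).

```python
import math

def crr_price(s0, k, r, u, d, n):
    """Value the payoff max(S_T, K) in an n-step Cox-Ross-Rubinstein
    binomial tree under the risk-neutral measure.
    s0: initial index level; k: guarantee; r: per-step interest rate;
    u, d: up/down factors per step; n: number of steps to the term T.
    (Illustrative sketch; parameters are assumptions, not from the paper.)"""
    q = (math.exp(r) - d) / (u - d)        # risk-neutral up-probability per step
    disc = math.exp(-r * n)                # discount factor over the full term
    price = 0.0
    for j in range(n + 1):                 # j = number of up-moves out of n
        prob = math.comb(n, j) * q**j * (1 - q)**(n - j)
        s_t = s0 * u**j * d**(n - j)
        price += prob * max(s_t, k)
    return disc * price
```

Because max(S_T, K) dominates both S_T and K, the value always exceeds both the guarantee's present value and the forward value of the index alone, which is a useful sanity check on any implementation.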

4.
Quantitative Finance, 2013, 13(4): 282-296
Abstract

What return should you expect when you take on a given amount of risk? How should that return depend upon other people's behaviour? What principles can you use to answer these questions? In this paper, I approach these topics by exploring the consequences of two simple hypotheses about risk.

The first is a common-sense invariance principle: assets with the same perceived risk must have the same expected return. It leads directly to the well known Sharpe ratio and the classic risk-return relationships of arbitrage pricing theory and the capital asset pricing model.

The second hypothesis concerns the perception of time. I conjecture that in times of speculative excitement, short-term investors may instinctively imagine stock prices to be evolving in a time measure different from that of calendar time. They may perceive and experience the risk and return of a stock in intrinsic time, a dimensionless time scale that counts the number of trading opportunities that occur, but pays no attention to the calendar time that passes between them.

Applying the first hypothesis in the intrinsic time measure suggested by the second, I derive an alternative set of relationships between risk and return. Its most noteworthy feature is that, in the short-term, a stock's trading frequency affects its expected return. I show that short-term stock speculators will expect returns proportional to the temperature of a stock, where temperature is defined as the product of the stock's traditional volatility and the square root of its trading frequency. Furthermore, I derive a modified version of the capital asset pricing model in which a stock's excess return relative to the market is proportional to its traditional beta multiplied by the square root of its trading frequency.

I also present a model for the joint interaction of long-term calendar-time investors and short-term intrinsic-time speculators that leads to market bubbles characterized by stock prices that grow super-exponentially with time.

Finally, I show that the same short-term approach to options speculation can lead to an implied volatility skew.

I hope that this model will have some relevance to the behaviour of investors expecting inordinate returns in highly speculative markets.
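The two scaling relations stated in this abstract are one-liners, and restating them as code makes the square-root dependence on trading frequency explicit. This is a minimal illustration; the function and parameter names are my own, not the paper's notation.

```python
import math

def temperature(volatility, trading_frequency):
    """The 'temperature' of a stock as defined above: traditional
    volatility times the square root of its trading frequency."""
    return volatility * math.sqrt(trading_frequency)

def intrinsic_time_beta(beta, trading_frequency):
    """Sketch of the modified CAPM relation described above: the
    short-term excess return is proportional to traditional beta
    scaled by the square root of trading frequency."""
    return beta * math.sqrt(trading_frequency)
```

So, under these relations, a stock traded four times as frequently as another with the same volatility and beta carries twice the temperature and twice the frequency-adjusted beta.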

5.
Abstract

Some years ago, in the course of an analysis of upper and lower limits for incomplete moments of statistical distributions, I established an elementary summation formula1 which proved rather useful for the purpose I had in view. Subsequently the formula was generalized by Professor Steffensen, who showed2 that the formula in question could be looked upon as giving the first term of an expansion in a certain type of series. Professor Steffensen established recurrence formulae for the coefficients of the series and computed the second, third and fourth terms and the corresponding remainders,1 but did not arrive at a general, explicit expression for the coefficient of the n-th term and the corresponding remainder. A year later I found these expressions accidentally while working on another problem. I also discovered the real nature of the procedure in question, which proved to be a certain kind of least-squares fitted polynomial approximation. I did not, however, publish the result at the time. Taking the question up again later, I found that the whole problem could be considerably generalized. The type of generalization in question is analogous to the generalization from polynomials to arbitrary functions.

6.
Abstract

In this discussion of Brouwer and Naarding's article ‘Making Deferred Taxes Relevant’, which is published in this issue of Accounting in Europe, I question several aspects of their proposal to change the tax accounting standard. I argue that a quest for more value relevance of individual balance sheet items is not a good guideline for accounting standard setting. The distinction between book-first and tax-first temporary differences may be helpful for some analytical purposes, but it is not sufficiently robust to serve as a basis for an accounting standard. However, I agree with the authors that the efforts to improve IAS 12 should not be abandoned.

7.
Abstract

§ 1. Correlation Generally.

In my doctoral thesis of 1919,1 This Journal, 1919, p. 1. I made some critical remarks on the theory of correlation, trying especially to exhibit the rather unaccomplished state of this theory, and the danger of releasing its formulae for general use. In a supplementary note,2 Ibidem, p. 204. I pushed my criticism a little further. More than ten years have since passed, but I find my point of view still sustainable. One reproach, however, I have never been able to reject, viz. that of having been exclusively negative. In fact, I find this position rather natural from a philosophical standpoint. The information value of correlation calculations is indeed, as a rule, very small. And this seems the more regrettable, as the importance of the Στοχαστική Τέχνη (the stochastic art), even for the most difficult questions of knowledge, ought to be very great. In a paper of 1924, “Quelques questions concernant les principes de la théorie des probabilités”,3 Ibidem, 1924, p. 107. I tried to explain how I imagine the theory of correlation should be developed in order to be more apt to address such philosophical questions.

8.
Abstract

In his short article, Professor Seal considers inter alia the suitability of the renewal approach to the occurrence scheme in risk theory. He gives a catalogue of four facts which, according to him, make the renewal approach less reasonable. About the first two facts he himself says that they may be removed as more actual experience becomes available, so I will not discuss them but consider only the last two.

9.
The exploration of the mean-reversion of commodity prices is important for inventory management, inflation forecasting and contingent claim pricing. Bessembinder et al. [J. Finance, 1995, 50, 361–375] document the mean-reversion of commodity spot prices using futures term structure data; however, mean-reversion to a constant level is rejected in nearly all studies using historical spot price time series. This indicates that the spot prices revert to a stochastic long-run mean. Recognizing this, I propose a reduced-form model with the stochastic long-run mean as a separate factor. This model fits the futures dynamics better than do classical models such as the Gibson–Schwartz [J. Finance, 1990, 45, 959–976] model and the Casassus–Collin-Dufresne [J. Finance, 2005, 60, 2283–2331] model with a constant interest rate. An application for option pricing is also presented in this paper.
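The idea of mean-reversion toward a stochastic long-run mean can be simulated in a few lines. The Euler scheme below is an illustrative assumption of such dynamics (log spot reverting to a long-run mean that itself diffuses), not the paper's exact specification; every name and parameter here is my own.

```python
import math
import random

def simulate_two_factor(x0, y0, kappa, sigma_x, sigma_y, dt, n, seed=0):
    """Euler discretization of a two-factor sketch: the log spot price x
    mean-reverts at speed kappa toward a stochastic long-run mean y,
    which itself follows a driftless Brownian motion.
    (Illustrative dynamics only, not the paper's model.)"""
    rng = random.Random(seed)
    xs, ys = [x0], [y0]
    for _ in range(n):
        x, y = xs[-1], ys[-1]
        x += kappa * (y - x) * dt + sigma_x * math.sqrt(dt) * rng.gauss(0, 1)
        y += sigma_y * math.sqrt(dt) * rng.gauss(0, 1)
        xs.append(x)
        ys.append(y)
    return xs, ys
```

Because the target y wanders, the simulated spot never settles around a fixed level, which is exactly the feature that a constant-mean model cannot reproduce.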

10.

The sequential approach to credibility, developed by Landsman and Makov [(1999a) On stochastic approximation and credibility. Scand. Actuarial J. 1, 15–31; (1999b) Sequential credibility evaluation for symmetric location claim distributions. Insurance: Math. Econ. 24, 291–300], is extended to the scale dispersion family, which contains distributions often used in actuarial science: log-normal, Weibull, half-normal, stable and Pareto, to mention only a few. For members of this family a sequential quasi-credibility formula is devised, which can also be used for heavy-tailed claims. The results are illustrated by a study of log-normal claims.

11.
Abstract

In day-to-day life, we are continuously exposed to different kinds of risk. Unfortunately, avoiding risk can often come at societal or individual costs. Hence, an important task within risk management is deciding how far it can be justified to expose members of society to risk x in order to avoid societal and individual costs y – and vice versa. We can refer to this as the task of setting an acceptable risk threshold. Judging whether a risk threshold is justified requires normative reasoning about what levels of risk exposure are permissible. One such prominent normative theory is utilitarianism. According to utilitarians, the preferred risk threshold is the one that yields more utility for more people than the alternative risk thresholds. In this paper, I investigate whether, and to what extent, utilitarian theory can be used to normatively ground a particular risk threshold in this way. In particular, I argue that there are (at least) seven different utilitarian approaches to setting an acceptable risk threshold. I discuss each of these approaches in turn and argue that none of them can satisfactorily ground an acceptable risk threshold.

12.
Abstract

The problem of expressing a difference of a given order of a function in terms of successive derivatives of the function, and the related problem of obtaining a manageable form of the remainder term of a special expansion of this kind, have on several occasions been treated in the literature. One of the best known results of investigations on this subject is Markoff's formula,1 which may be written in a slightly modified form in which Δ^m 0^μ = [Δ^m x^μ]_{x=0} and Δ^m_h denotes the descending difference of order m for a table interval of length h.

13.
Abstract

1. In an earlier Note1 I suggested measuring the dependence between statistical variables by an expression in which p_{ij} is the probability that x assumes the value x_i and y the value y_j, while the summation is taken with respect to all i and j for which p_{ij} > p_{i·} p_{·j}.

14.
In my paper entitled ‘Contribution Formula, Integral Methods, and Risk Theory’, which appeared in this Journal as the first pages of the 1932 volume, I discussed the Floating Bonus method, and in an Additional Note, page 43, I touched on the question of priority. I have since had the honour of receiving from an illustrious colleague, Mr. Elderton, a letter concerning this question, in which he says:

I have been very interested in reading your paper in the new number of the Skandinavisk Aktuarietidskrift and I wondered if you are aware that the floating bonus system was in use many years ago in an office called the Mutual Life Assurance Society (of London) and that a paper referring to the bonus system was written by H. W. Manly in J. I. A., Vol. XXIII, page 233, in April 1882. Somewhat similar arrangements were made by Industrial Insurance Companies in England when they began to give bonuses to their policies — such bonuses being strictly given as an act of grace. The policies themselves were written as non-participating assurances. I thought as a matter of history you might be interested to have this information and if you can find in Sweden a copy of the J. I. A. you will find some remarks on the discussion of the history of the subject by A. H. Bailey who was then President of the Institute of Actuaries.

Yours sincerely

W. Palin Elderton.

15.
Abstract

It is well known that Charlier suggested developing a frequency function f(x) in a series of a particular form, in which ψ(x) stands for a given frequency function, while the symbol ∇ denotes the ascending difference, that is, ∇ψ(x) = ψ(x) − ψ(x − 1). In the form proposed by Charlier this method is open to objections of which he is himself partly aware, the chief objection being that no account is taken of questions of convergence. It seems, therefore, of interest to examine what becomes of the method if it is not carried beyond legitimate bounds. In doing so, I shall try to simplify the determination of the constants, a problem which has been attacked by Charlier1 C. V. L. Charlier: Über die Darstellung willkürlicher Funktionen (Arkiv för matematik, astronomi och fysik, Bd 2, N:r 20). himself, and by N. R. Jørgensen2 N. R. Jørgensen: Note sur la fonction de répartition de type B de M. Charlier (ibid., Bd 10, N:r 15); Undersøgelser over Frequensflader og Korrelation (Copenhagen, 1916), S. 5–15. in a special case. For this purpose I avail myself of a class of symmetrical functions of the observations for which I have proposed the name of “factorial moments” and the systematic use of which I recommended in my paper “Factorial Moments and Discontinuous Frequency-Functions”.3 This Journal, 1923, p. 73. See also the author's book “Matematisk Iagttagelseslære” (Copenhagen, 1923), I, § 2. I shall assume that the reader is familiar with the notation employed in that paper, which differs in some respects from the usual notation of moments.

16.
Abstract

The way in which the primes are distributed among the integers has attracted the curiosity of mathematicians, and even actuaries, for as long as the problem has been recognised. It has been shown that there are infinitely many primes, although their relative frequency among the integers in the neighbourhood of x decreases with increasing x. We know that there will be about y/log x primes in the interval from x to x + y for y ≪ x. More precisely, the relative frequency of primes will be 1/log x in the neighbourhood of x. This was conjectured by Gauss, but has been an established fact since 1896.
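The y/log x estimate is easy to check numerically with a sieve. The sketch below counts the primes in one interval and compares the count with the heuristic; the particular values of x and y are illustrative choices of mine.

```python
import math

def primes_up_to(n):
    """Return all primes <= n via the sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

# Count primes in (x, x + y] and compare with the heuristic y / log x.
x, y = 100_000, 10_000
count = sum(1 for p in primes_up_to(x + y) if p > x)
estimate = y / math.log(x)
```

For intervals of this size the heuristic typically lands within a few percent of the true count, consistent with the 1/log x density claim.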

17.
Abstract

Introductory. In the theory of random processes we may distinguish between ordinary processes and point processes. The former are concerned with a quantity, say x(t), which varies with time t; the latter with events, incidences, which may be represented as points along the time axis. For both categories the stationary process is of great importance, i.e., the special case in which the probability structure is independent of absolute time. Several examples of stationary processes of the ordinary type have been examined in detail (see e.g. H. Wold 1). The literature on stationary point processes, on the other hand, has been exclusively concerned with the two simplest cases, viz. the Poisson process and the slightly more general process arising in renewal theory (see e.g. J. Doob 3).

18.
Many empirical studies have shown that financial asset returns do not always exhibit Gaussian distributions, for example hedge fund returns. The introduction of the family of Johnson distributions allows a better fit to empirical financial data. Additionally, this class can be extended to a quite general family of distributions by considering all possible regular transformations of the standard Gaussian distribution. In this framework, we consider the optimal portfolio positioning problem, which was first addressed by Brennan and Solanki [J. Financial Quant. Anal., 1981, 16, 279–300] and Leland [J. Finance, 1980, 35, 581–594], and further developed by Carr and Madan [Quant. Finance, 2001, 1, 9–37] and Prigent [Generalized option based portfolio insurance. Working Paper, THEMA, University of Cergy-Pontoise, 2006]. As a by-product, we introduce the notion of Johnson stochastic processes. We determine and analyse the optimal portfolio for log returns having Johnson distributions. The solution is characterized for arbitrary utility functions and illustrated in particular for a CRRA utility. Our findings show how the profiles of financial structured products must be selected when taking account of non-Gaussian log-returns.
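A Johnson SU variable is, by construction, a sinh transform of a standard Gaussian, which is what lets the family capture the skewness and fat tails mentioned above. The stdlib-only sketch below draws samples via that defining transform; the function and parameter names are my own, and this is an illustration of the distribution family, not of the paper's optimization.

```python
import math
import random

def johnson_su_sample(a, b, loc, scale, n, seed=1):
    """Draw n samples from a Johnson SU distribution using its defining
    transform of a standard Gaussian Z:  X = loc + scale * sinh((Z - a) / b).
    Smaller b gives heavier tails; nonzero a introduces skewness.
    (Parameter naming is an assumption of this sketch.)"""
    rng = random.Random(seed)
    return [loc + scale * math.sinh((rng.gauss(0, 1) - a) / b)
            for _ in range(n)]
```

With a = 0 the distribution is symmetric about loc (the sample median sits near loc), while the sinh mapping stretches the Gaussian tails into heavier ones, which is the fitting flexibility the abstract refers to.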

19.
Abstract

I present a summary and analysis of a series of papers from this special issue of Accounting in Europe that examine the role and current status of International Financial Reporting Standards (IFRS) in the completion of national accounting rules applicable to large ‘non-listed in a regulated market’ non-financial undertakings trading for gain in 25 European countries, following the recent implementation of the new European Accounting Directive 2013/34/EU. IFRS has had a varying degree of influence across European countries. Some countries refer and are closely aligned to IFRS or to IFRS for small and medium-sized entities; some, while influenced by IFRS, retain complete independence; and some show limited influence, mostly when accounts are for other purposes such as taxation, dividend distribution or creditor protection. I present a number of classification schemes and contrast these with Nobes' [(2008). Accounting classification in the IFRS era. Australian Accounting Review, 18(3), 191–198] two-group classification of European accounting systems as strong equity/commercially driven versus weak equity/government driven/tax-dominated systems.

20.
Motivated by the practical challenge of monitoring the performance of a large number of algorithmic trading orders, this paper provides a methodology that leads to automatic discovery of the causes behind poor trading performance. It also gives theoretical foundations to a generic framework for real-time trading analysis. The common name for investigating the causes of bad and good trading performance is transaction cost analysis (Rosenthal [Performance Metrics for Algorithmic Traders, 2009]). Automated algorithms take care of most of the traded flows on electronic markets (more than 70% in the US, 45% in Europe and 35% in Japan in 2012). The academic literature provides different ways to formalize these algorithms and to show how optimal they can be from a mean-variance (as in Almgren and Chriss [J. Risk, 2000, 3(2), 5–39]), a stochastic control (e.g. Guéant et al. [Math. Financ. Econ., 2013, 7(4), 477–507]), an impulse control (see Bouchard et al. [SIAM J. Financ. Math., 2011, 2(1), 404–438]) or a statistical learning (as used in Laruelle et al. [Math. Financ. Econ., 2013, 7(3), 359–403]) viewpoint. This paper is agnostic about the way the algorithm has been built and provides a theoretical formalism to identify in real time the market conditions that influenced its efficiency or inefficiency. For a given set of characteristics describing the market context, selected by a practitioner, we first show how a set of additional derived explanatory factors, called anomaly detectors, can be created for each market order (following, for instance, Cristianini and Shawe-Taylor [An Introduction to Support Vector Machines and Other Kernel-based Learning Methods, 2000]). We then present an online methodology to quantify how this extended set of factors, at any given time, predicts (i.e. has influence on, in the sense of the predictive power or information defined in Basseville and Nikiforov [Detection of Abrupt Changes: Theory and Application, 1993], Shannon [Bell Syst. Tech. J., 1948, 27, 379–423] and Alkoot and Kittler [Pattern Recogn. Lett., 1999, 20(11), 1361–1369]) which of the orders are underperforming, while calculating the predictive power of this explanatory factor set. Armed with this information, which we call influence analysis, we intend to empower the order-monitoring user to take appropriate action on any affected orders: re-calibrating the trading algorithms working the order with new parameters, pausing their execution, or taking over with more direct trading control. We also intend that this method can be used to automatically adjust trading actions in the post-trade analysis of algorithms.

