Similar Documents
1.
Abstract

The investigation of mortality variations has attracted a great deal of interest in recent years. It is well known that, generally speaking, mortality rates have been decreasing continually over a long period. The detailed study of available experience concerning the structure of this movement, and the analysis of its relations to various factors of social and economic development and of its probable causes, obviously constitute a statistical problem of the highest importance. In the trend of mortality, and in the deviations caused by temporary influences, essential features of the history of the population concerned are revealed.

2.
Abstract

Several different statistics have been proposed for testing the independence between successive observations from a normal population. In order to choose between the various tests a theory of testing this hypothesis in certain populations is needed. In this paper the problem is studied within the framework of the Neyman-Pearson theory. Certain theorems concerning more general problems of quadratic forms are developed and later applied to the question of testing serial correlation.
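The abstract does not reproduce the competing statistics, but one classic member of this family is the von Neumann ratio of the mean square successive difference to the variance; a minimal illustrative sketch (not necessarily one of the statistics treated in the paper):

```python
def von_neumann_ratio(x):
    """Ratio of the mean square successive difference to the variance.

    Under independence the ratio is close to 2; values well below 2
    suggest positive serial correlation, values well above 2 suggest
    negative serial correlation.
    """
    n = len(x)
    mean = sum(x) / n
    num = sum((x[i + 1] - x[i]) ** 2 for i in range(n - 1)) / (n - 1)
    den = sum((xi - mean) ** 2 for xi in x) / n
    return num / den

# A strongly trending series gives a ratio far below 2,
# an alternating series a ratio far above 2.
print(von_neumann_ratio(list(range(20))))
print(von_neumann_ratio([0, 1] * 10))
```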

3.
4.
Abstract

This paper considers situations where a known number of the smallest values of a sample and a known number of the largest values have been truncated. The problem is to obtain an estimate of the population mean, an estimate of the standard deviation of this estimate of the mean, and an estimate of the population standard deviation. This paper derives a nonparametric estimate for each of these three cases. These estimates are approximately valid for most continuous statistical populations of practical interest when a small number of sample values are truncated and the sample size is not too small. The mean estimate consists of a linear function of the ordered values of the truncated sample, while each standard deviation estimate is the square root of a quadratic function of these observations.
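The estimators themselves are not reproduced in the abstract. As a minimal illustration of a linear function of the ordered retained values, the Winsorized mean substitutes the nearest retained order statistic for each deleted value; this is a stand-in for exposition, not the estimator the paper derives:

```python
def winsorized_mean(retained, r, s):
    """Rough population-mean estimate from a sample whose r smallest and
    s largest values were deleted: each deleted value is replaced by the
    nearest retained order statistic (an illustrative linear function of
    the ordered retained values, not the paper's estimator)."""
    y = sorted(retained)
    n = len(y) + r + s  # size of the original, untruncated sample
    return (sum(y) + r * y[0] + s * y[-1]) / n

# One smallest and two largest values were deleted from a sample of 8.
print(winsorized_mean([2.0, 3.0, 4.0, 5.0, 6.0], r=1, s=2))
```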

5.
Abstract

During the last few years quite a number of investigations – private as well as official – have been devoted to the study, by numerical methods, of the effect of a given mortality and fertility (or nativity) on population growth. Considerable progress has also been made – above all by the researches of A. J. Lotka – with the abstract mathematical treatment of this problem, which leads to certain integral equations with interesting asymptotic properties. Until quite recently, however, no account has been taken of the effect of different marriage rates, or rates of nuptiality. For the sake of simplicity the fertility has, in fact, been assumed to have a certain value for each age group of women as a whole, without any distinction whatever between married and unmarried women. This is of course only a first approach to the more general and more important problem of studying the combined effect of a given mortality, a given nuptiality, and a given fertility of married as well as of unmarried women.

6.
Abstract

Consider a set of observations which are obtained by truncating a sample of known size. The truncation procedure consists of deleting a known number of the largest sample values and a known number of the smallest sample values. One problem considered is the use of this data to estimate certain of the population percentage points for which the corresponding sample data was deleted. Another problem is to estimate the population mean and standard deviation. This paper presents solutions to these problems which are valid for a rather general class of continuous statistical populations. The results obtained should be applicable to most practical cases of a continuous type. A sample analog of the percentage point estimation procedure has interesting uses for life testing situations. Namely, the first time at which a specified number of additional items of a sample will have failed can be predicted from the values of the items which have already failed.

7.
Bond rating Transition Probability Matrices (TPMs) are built over a one-year time frame, and for many practical purposes, such as the assessment of risk in portfolios or the computation of banking capital requirements (e.g. under the new IFRS 9 regulation), one needs to compute the TPM and probabilities of default over a smaller time interval. In the context of continuous-time Markov chains (CTMCs), several deterministic and statistical algorithms have been proposed to estimate the generator matrix. We focus on the Expectation-Maximization (EM) algorithm of Bladt and Sorensen [J. R. Stat. Soc. Ser. B (Stat. Method.), 2005, 67, 395–410] for a CTMC with an absorbing state. This work's contribution is threefold. Firstly, we provide directly computable closed-form expressions for quantities appearing in the EM algorithm and the associated information matrix, allowing easy approximation of confidence intervals; previously, these quantities had to be estimated numerically, and the closed forms yield considerable computational speedups. Secondly, we prove convergence to a single set of parameters under very weak conditions (for the TPM problem). Finally, we provide a numerical benchmark of our results against other known algorithms, in particular on several problems related to credit risk. The EM algorithm we propose, equipped with the new formulas (and error criteria), outperforms other known algorithms on several metrics, in particular with much less overestimation of probabilities of default in higher ratings than other statistical algorithms.
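The operation that motivates estimating a generator in the first place is straightforward: once a generator Q is in hand, the TPM over any horizon t is the matrix exponential exp(tQ), so sub-annual TPMs follow directly. A minimal sketch; the 3-state generator below is hypothetical, and a truncated Taylor series stands in for a production-quality matrix exponential such as scipy.linalg.expm:

```python
import numpy as np

def expm_taylor(A, terms=40):
    """Matrix exponential via a truncated Taylor series (adequate for
    small, well-scaled generators like the one below)."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

# Hypothetical 3-state generator: rows sum to 0, last state absorbing
# (e.g. "default"); the rates are illustrative, not fitted.
Q = np.array([[-0.10,  0.08, 0.02],
              [ 0.05, -0.15, 0.10],
              [ 0.00,  0.00, 0.00]])

P_quarter = expm_taylor(0.25 * Q)  # three-month TPM
P_year = expm_taylor(Q)            # one-year TPM
```

Consistency check: compounding the quarterly TPM four times recovers the annual one, since exp(Q/4)^4 = exp(Q).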

8.
Abstract

In this paper, we examine case studies from three different areas of insurance practice: health care, workers' compensation, and group term life. These case studies illustrate how the broad class of panel data models can be applied to different functional areas and to data with different features. Panel data models, also known as longitudinal data models, are regression-type models that have been developed extensively in the biological and economic sciences. The data features that we discuss include heteroscedasticity, random and fixed effect covariates, outliers, serial correlation, and limited dependent variable bias. We demonstrate the process of identifying these features using graphical and numerical diagnostic tools from standard statistical software.

Our motivation for examining these cases comes from credibility rate making, a technique for pricing certain types of health care, property and casualty, workers' compensation, and group life coverages. It has been a part of actuarial practice since Mowbray's (1914) fundamental contribution. In earlier work, we showed how many types of credibility models could be expressed as special cases of panel data models. This paper exploits this link by using tools developed in connection with panel data models for credibility rate-making purposes. In particular, special routines written for credibility rate-making purposes are not required.

9.
Abstract

The conventional applications of the theory of risk concern many important sides of the insurance business, e.g. evaluating stability and estimating a suitable level for the maximum net retention, the safety loading, or the funds. Whether or not the applications have been useful for practical management may have depended very much on how the risk-theoretical treatments have been linked with the complexity of various other aspects involved in the actual decision making. Quite obviously the difficulties in this respect have been considerable, probably sometimes quite overwhelming, judging from opinions sometimes expressed that risk theory lacks any practical value. Our purpose is to attack just this problem and to endeavour to build up a picture of the management process of the insurance business in its entirety (as far as possible) and to place the risk-theoretical aspects in it as a part among numerous other parts, most of which are not of an actuarial character. In this way some of the classical applications of risk theory are amalgamated with the ideas of modern business planning, especially with the techniques of long-range prognoses on the basis of different, often alternative assumptions or, as they are often called, different business strategies. The main ideas are outlined in the study book of risk theory by Beard-Pentikainen-Pesonen (1969), chapter 13.

10.
Abstract

Extract

The importance of the problem of the optimum choice of stratification points, when conducting stratified sampling on a univariate population, has been pointed out by Dalenius (1950, 1957). In a simplified form, this problem may be given the following mathematical formulation.
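The formulation itself is cut off in this extract, but a widely used practical approximation to Dalenius's optimum is the cumulative square-root-of-frequency rule of Dalenius and Hodges: histogram the variable, accumulate sqrt(frequency), and cut at equal intervals of the cumulated scale. A minimal sketch of that rule (an approximation, not the exact formulation referred to above):

```python
import math

def cum_sqrt_f_boundaries(values, n_bins, n_strata):
    """Approximate optimum stratification points by the cumulative
    sqrt(f) rule: bin the data, cumulate sqrt(frequency), and cut the
    cumulative scale into n_strata equal parts."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins
    freq = [0] * n_bins
    for v in values:
        i = min(int((v - lo) / width), n_bins - 1)
        freq[i] += 1
    cum, total = [], 0.0
    for f in freq:
        total += math.sqrt(f)
        cum.append(total)
    cuts = []
    for k in range(1, n_strata):
        target = total * k / n_strata
        j = next(i for i, c in enumerate(cum) if c >= target)
        cuts.append(lo + (j + 1) * width)
    return cuts

data = [i / 100 for i in range(100)]  # uniform on [0, 1)
print(cum_sqrt_f_boundaries(data, n_bins=20, n_strata=4))
```

For a uniform population the rule reproduces the equal-width cuts one would expect, roughly at 0.25, 0.5 and 0.75.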

11.
Abstract

1. Introduction and Summary

The spectral analysis plays an important role in the study of stationary stochastic processes. It cannot, however, always be assumed that the nature of the corresponding spectral function is known a priori: we are then faced with two problems. In the first we may have either a discrete spectrum plus uniform noise or a continuous spectrum, and in the second we may have both at the same time. A possible method has been suggested by Whittle [5] as a solution to the second problem. A discriminatory test based on the likelihood ratio has been put forward by Bartlett as a solution to the first problem, which is an important one occurring in practice. The test procedure was applied to two suitable artificial series. The test, when applied to a series with a harmonic element, resulted in failure to arrive at a decision. An investigation was then made of the applicability of this test to such series in general.

12.
Abstract

In recent decades, as the use of derivatives by financial institutions has expanded, the shortcomings of historical cost accounting approaches have become increasingly apparent. Since derivatives can create large exposures to risk that go unnoticed under historical standards, the accounting industry has focused on how to change the standards so that these risks are reflected appropriately in a company’s accounting statements. New standards such as SFAS 115 and SFAS 133 have been adopted in part to achieve this goal. However, both of these standards use a piecemeal approach to risk measurement that may be adding to the problem rather than creating a solution. This paper will use a simple equity-indexed annuity to illustrate the problem with historical cost accounting and with the standards that have been adopted to correct it. The paper then argues that the only legitimate means of reflecting risk properly on a company’s accounting statements is to adopt full fair value accounting for all assets and liabilities on the company’s books.

13.
Summary

Given a convex set F in the plane with a sufficiently smooth boundary, we try to approximate it by polygons in the following way. Using some specified sampling procedure we pick out n points on the boundary. Through each such point we draw the tangent. Consider the polygon F*n spanned by all these tangents. If n is large we would expect F*n to be close to F. Measuring the deviation by the area of the set difference F*n \ F, we will derive an asymptotic expression for this area as n becomes large. This expression can be used to choose the optimum sampling procedure in the sense of smallest asymptotic deviation.

The question arose from a problem of statistical approximation in propositional calculus; see section 1.
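For the special case where F is a disc of radius r and the n boundary points are equally spaced (one particular sampling procedure), the tangent polygon is the circumscribed regular n-gon and the excess area has a closed form, exhibiting the expected decay as n grows. This concrete case is an illustration only, not the paper's general result:

```python
import math

def tangent_polygon_excess(r, n):
    """Area of F*n \\ F when F is a disc of radius r and tangents are
    drawn at n equally spaced boundary points: the circumscribed regular
    n-gon has area n * r**2 * tan(pi/n), so the excess over pi * r**2
    decays like pi**3 * r**2 / (3 * n**2)."""
    return n * r * r * math.tan(math.pi / n) - math.pi * r * r

for n in (10, 100, 1000):
    print(n, tangent_polygon_excess(1.0, n))
```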

14.
Abstract

This paper presents a new multivariate graduation method based upon a constrained information theoretic methodology. The statistical formulation of the graduation problem is shown to result in a convex mathematical programming problem, thus allowing the actuary to include constraints such as monotonicity along the rows, monotonicity down the columns, monotonicity along the upward diagonal, as well as convexity and any of a variety of other relationships desired for the graduated series. An illustrative application is made to the graduation of select and ultimate mortality tables.

15.
Abstract

It is basic actuarial knowledge that the pure premium of an insurance contract can be written as the product of the expected claim number and the expected claim amount. Actuaries use credibility theory to incorporate the contract’s individual experience into this calculation in a statistically optimal way. For many years, however, the use of credibility was limited to the frequency component. Starting with the paper by Hewitt (1971), there have been various suggestions as to how credibility theory can also be applied to the severity component of the pure premium. The latest such suggestion, Frees (2003), revived interest in the problem.

In this paper, we review four different formulas incorporating frequency and severity into credibility calculations. We then compare by simulation which one is most accurate at predicting a contract’s next-year outcome. It is found that the classical formula of Bühlmann (1967) is as good as the other ones in many cases. Alternatives, however, may offer easier analysis of the separate effects of frequency and severity on the premium.

We also show that all the formulas reviewed in this paper stem from the same minimization problem, and we present a general, integrated solution. At the same time, we complete Gerber (1972) by providing a proof of its main result and by stating the required additional assumptions.
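The classical Bühlmann (1967) formula mentioned above weights the contract's own experience against the collective premium. A minimal sketch, where the structure parameter k (the ratio of expected process variance to the variance of hypothetical means) is taken as given rather than estimated, and all numbers are illustrative:

```python
def buhlmann_premium(claims, mu, k):
    """Classical Buhlmann credibility premium: a weighted average of the
    contract's own mean claim xbar and the collective premium mu, with
    credibility weight Z = n / (n + k)."""
    n = len(claims)
    xbar = sum(claims) / n
    z = n / (n + k)
    return z * xbar + (1 - z) * mu

# Four years of experience averaging 100 against a collective premium
# of 150 with k = 4 gives Z = 0.5 and a premium of 125.
print(buhlmann_premium([100.0, 120.0, 80.0, 100.0], mu=150.0, k=4.0))
```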

16.
Public Money & Management, 2013, 33(3): 58-68

We are indebted to Alastair Gray, of the Open University, for much of the statistical analysis on which this article is based.

17.
Abstract

In this paper I show how methods that have been applied to derive results for the classical risk process can be adapted to derive results for a class of risk processes in which claims occur as a renewal process. In particular, claims occur as an Erlang process. I consider the problem of finding the survival probability for such risk processes and then derive expressions for the probability and severity of ruin and for the probability of absorption by an upper barrier. Finally, I apply these results to consider the problem of finding the distribution of the maximum deficit during the period from ruin to recovery to surplus level 0.
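The paper derives exact expressions; as an independent sanity check, a finite-horizon ruin probability for a renewal risk process with Erlang inter-claim times can be estimated by simulation. All parameter values below are illustrative, and exponential claim sizes are an assumption of this sketch:

```python
import random

def ruin_probability(u, c, shape, rate, claim_mean, horizon,
                     n_paths=5000, seed=1):
    """Monte Carlo estimate of the probability of ruin before `horizon`
    for a surplus process u + c*t - S(t), where claims with exponential
    mean `claim_mean` arrive at Erlang(shape, rate) renewal epochs."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        t, claims_total = 0.0, 0.0
        while True:
            t += rng.gammavariate(shape, 1.0 / rate)  # Erlang inter-claim time
            if t > horizon:
                break
            claims_total += rng.expovariate(1.0 / claim_mean)
            if u + c * t - claims_total < 0.0:
                ruined += 1
                break
    return ruined / n_paths

# Higher initial surplus should make ruin much rarer.
p_low = ruin_probability(u=0.0, c=1.5, shape=2, rate=2.0,
                         claim_mean=1.0, horizon=50.0)
p_high = ruin_probability(u=20.0, c=1.5, shape=2, rate=2.0,
                          claim_mean=1.0, horizon=50.0)
```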

18.
Abstract

Interest in ballot theorems arose in 1887, not as a problem in the theory of probability, but as a mathematical puzzle. Little was it realized at that time that, in the subsequent analysis of this problem, ballot theorems would have many interesting applications in the theory of probability.

19.
Abstract

In this article we investigate three related investment-consumption problems for a risk-averse investor: (1) an investment-only problem that involves utility from only terminal wealth, (2) an investment-consumption problem that involves utility from only consumption, and (3) an extended investment-consumption problem that involves utility from both consumption and terminal wealth. Although these problems have been studied quite extensively in continuous-time frameworks, we focus on discrete time. Our contributions are (1) to model these investment-consumption problems using a discrete model that incorporates the environment risk and mortality risk, in addition to the market risk that is typically considered, and (2) to derive explicit expressions of the optimal investment-consumption strategies to these modeled problems. Furthermore, economic implications of our results are presented. It is reassuring that many of our findings are consistent with the well-known results from the continuous-time models, even though our models have the additional features of modeling the environment uncertainty and the uncertain exit time.

20.
Abstract

The Whittaker method of graduation has been known and used for a long time and has remained popular because it possesses a number of ideal properties, including being nonparametric and having an easy-to-understand foundation. The latter means that the method makes sense, so its user has a good idea of what it can and cannot do. As well, there is a statistical derivation available that uses Bayesian notions. A problem with that derivation is that it is more intuitive than precise and as such does not provide a useful frame of reference for the graduator. Regardless of the point of view, the graduation cannot be completed until the smoothing parameter is selected, and this has always relied on the judgment of the analyst.

In this paper, three tasks will be undertaken. The first is to replace the ad hoc Bayesian derivation of the method with a formal Bayesian specification. The second is to show that with this specification it is possible to complete the graduation without making an arbitrary selection of the smoothing parameter. The third is to provide a Monte Carlo Bayesian approach for the incorporation of constraints in the graduated values. The ideas will be illustrated with a numerical example.
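The Whittaker method itself minimises a fit term plus h times a smoothness penalty on z-th differences, which with unit weights has the closed-form solution sketched below; the mortality-like series and the value of h are illustrative (choosing h is precisely the problem the paper addresses):

```python
import numpy as np

def whittaker(u, h, z=2):
    """Whittaker graduation with unit weights: minimises
    sum((u - v)**2) + h * sum of squared z-th differences of v,
    solved in closed form as v = (I + h*K'K)^{-1} u,
    where K is the z-th difference operator."""
    n = len(u)
    K = np.diff(np.eye(n), n=z, axis=0)  # (n - z) x n difference operator
    return np.linalg.solve(np.eye(n) + h * K.T @ K, np.asarray(u, dtype=float))

raw = [0.010, 0.014, 0.011, 0.018, 0.016, 0.022, 0.021, 0.027]
smooth = whittaker(raw, h=10.0)
```

With unit weights the graduated values preserve the total of the raw series exactly, and the smoothness penalty is strictly reduced.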
