Similar Literature (20 results)
1.
Abstract

This paper considers situations where a known number of the smallest values of a sample and a known number of the largest values have been truncated. The problem is to obtain an estimate of the population mean, an estimate of the standard deviation of this estimate of the mean, and an estimate of the population standard deviation. This paper derives a nonparametric estimate for each of these three cases. These estimates are approximately valid for most continuous statistical populations of practical interest when a small number of sample values are truncated and the sample size is not too small. The mean estimate consists of a linear function of the ordered values of the truncated sample, while each standard deviation estimate is the square root of a quadratic function of these observations.
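Schematically (notation ours, not the paper's), with the r smallest and s largest sample values removed and x_(r+1) ≤ ... ≤ x_(n−s) the remaining order statistics, the three estimators described have the form

\[
\hat{\mu} = \sum_{i=r+1}^{n-s} c_i\, x_{(i)}, \qquad
\widehat{\mathrm{sd}}(\hat{\mu}) = \Bigl(\sum_{i,j} a_{ij}\, x_{(i)} x_{(j)}\Bigr)^{1/2}, \qquad
\hat{\sigma} = \Bigl(\sum_{i,j} b_{ij}\, x_{(i)} x_{(j)}\Bigr)^{1/2},
\]

where the coefficients c_i, a_{ij} and b_{ij} are the ones derived in the paper and are not reproduced here.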

2.
Abstract

For any continuous univariate population with finite variance there is a mathematical relation which expresses the variate-value z as a convergent series of Legendre polynomials in (2F − 1), where F ≡ F(z) is the distribution function of the population, and the coefficients in this series are the expectations of homogeneous linear forms in the order statistics of random samples from the population. The relation is well adapted for estimating the median and other percentile points when nothing more is known about the population, but a random sample from it is available. The variances of these estimates can be estimated from the data. A somewhat similar relation which expresses z as a series of Chebyshev polynomials is also discussed briefly. Finally a modification of the Legendre polynomial relation enables prior knowledge of a finite extremity of the population range to be used, and a numerical illustration is given.
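The relation in question can be written as the series below (a sketch in our notation):

\[
z = \sum_{k=0}^{\infty} a_k\, P_k\bigl(2F(z) - 1\bigr),
\]

where P_k is the k-th Legendre polynomial and each coefficient a_k is the expectation of a homogeneous linear form in the order statistics. A percentile point is then estimated by truncating the series at a suitable order, setting F equal to the desired probability (e.g. 1/2 for the median), and replacing the a_k by their sample analogues.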

3.
Abstract

The problem of dividing the frequency function of the Weibull distribution into L (L = 1, ..., 6) strata for the purpose of estimating the population mean under optimum allocation from a stratified random sample is considered. The optimum points of stratification (y_1, ..., y_{L-1}) determining the minimum variance of the estimator are obtained. The variance of the sampling units in each stratum and the variance of the estimate are also given.
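As an illustration of the optimization being described (this is our numerical sketch, not the paper's analytical derivation; the function and parameter names are ours), the stratification points can be searched for by minimizing the large-sample variance of the stratified mean under optimum (Neyman) allocation, which is proportional to (Σ_h W_h σ_h)²:

```python
# Illustrative sketch: search for Weibull stratification points y_1 < ... < y_{L-1}
# minimizing the large-sample variance of the stratified mean under Neyman allocation.
import numpy as np
from scipy import integrate, optimize, stats

def neyman_objective(cuts, shape, scale):
    """Return sum_h W_h * sigma_h for the strata defined by the interior cut points."""
    dist = stats.weibull_min(shape, scale=scale)
    hi = dist.ppf(0.9999)                              # practical upper integration bound
    edges = np.concatenate(([0.0], np.sort(cuts), [hi]))
    total = 0.0
    for a, b in zip(edges[:-1], edges[1:]):
        W = dist.cdf(b) - dist.cdf(a)                  # stratum weight W_h
        if W <= 1e-12:
            return np.inf                              # degenerate stratum: reject
        m1 = integrate.quad(lambda x: x * dist.pdf(x), a, b)[0] / W
        m2 = integrate.quad(lambda x: x**2 * dist.pdf(x), a, b)[0] / W
        total += W * np.sqrt(max(m2 - m1**2, 0.0))     # W_h * sigma_h
    return total

# Example: L = 3 strata for a Weibull with shape 2, scale 1;
# Var(stratified mean) under Neyman allocation is (sum_h W_h*sigma_h)^2 / n.
res = optimize.minimize(neyman_objective, x0=[0.7, 1.4], args=(2.0, 1.0),
                        method="Nelder-Mead")
print("optimum cut points:", np.sort(res.x))
```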

4.
Abstract

1. When the frequency function of a statistical variable is known, one of the most important tasks of Theoretical Statistics is to find the frequency functions of some simple functions of this variable. The most important are the first and second order moments in a sample containing a certain number of values of the variable.

5.
Abstract

The class of formulas considered in this paper belongs to a type where the required integral is approximately represented by a linear function of a certain number of equidistant values of the integrand. Formulas of this type may, for instance, be obtained by integrating Lagrange's interpolation formula between finite limits, as is well known. As regards the function to be integrated we assume only that it possesses, throughout the interval of integration, a continuous differential coefficient of the highest order of which use is made in deriving the particular formula under consideration. It is not necessary, then, that the function should possess differential coefficients of every order, much less that it should be a polynomial.
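In modern notation, a formula of the type considered (obtained, for instance, by integrating Lagrange's interpolation polynomial through equidistant nodes) can be written as

\[
\int_a^b f(x)\, dx \approx \sum_{i=0}^{n} w_i\, f(a + ih), \qquad h = \frac{b-a}{n},
\]

where the weights w_i are fixed numbers independent of f. The only smoothness assumption needed is that f possesses a continuous derivative of the highest order used in deriving the particular formula, which is exactly the condition stated above.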

6.
Abstract

1. Introduction and Summary

Spectral analysis plays an important rôle in the study of stationary stochastic processes. It cannot, however, always be assumed that the nature of the corresponding spectral function is known a priori; we are then faced with two problems. In the first we may have either a discrete spectrum plus a uniform noise or a continuous spectrum, and in the second we may have both at the same time. A possible method has been suggested by Whittle [5] as a solution to the second problem. A discriminatory test based on the likelihood ratio has been put forward by Bartlett as a solution to the first problem, which is an important one occurring in practice. The test procedure was applied to two suitable artificial series. The test, when applied to a series with a harmonic element, resulted in a failure to arrive at a decision. An investigation was then made on the applicability of this test to such series in general.

7.
Abstract

Market values of the invested assets are frequently published. For most insurance liabilities, there are no published market values and, therefore, these have to be constructed. This construction can be based on a best estimate and a price for the risks in the liabilities. This paper presents a model explaining how the best estimate and the price of mortality risk can be constructed. Several methods to describe the risks are already known. The purpose of this paper is to describe a method to determine the mortality risk in a practical way.

8.
Abstract

This paper applies a model of Alzheimer’s disease (AD) developed by Macdonald and Pritchard (2000) to the question of the potential for adverse selection in long-term care (LTC) insurance introduced by the existence of DNA tests for variants of the ApoE gene, the ε4 allele of which is known to predispose one to earlier onset of AD. It computes the expected present values (EPVs) of model LTC benefits with respect to AD for each of five ApoE genotypes, weighted average EPVs with and without adverse selection, and sample underwriting ratings. The paper concludes that adverse selection could increase costs significantly in a small LTC insurance market only if current population genetic risk is not much smaller than that observed in case-based studies, and if carriers of the ε4 allele are very much more likely to buy LTC insurance. Finally, the paper considers the cost of a combined retirement package, providing both pension and LTC insurance, and shows that it can reduce adverse selection.
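A schematic form of the weighted-average EPV with adverse selection (the notation and weighting scheme here are illustrative, not the paper's): with p_g the population frequency of ApoE genotype g and w_g the relative propensity of genotype g to buy LTC insurance,

\[
\overline{\mathrm{EPV}} = \frac{\sum_g p_g\, w_g\, \mathrm{EPV}_g}{\sum_g p_g\, w_g},
\]

with w_g = 1 for all genotypes giving the no-adverse-selection average, and larger w_g for ε4 carriers producing the adverse-selection cost increases discussed above.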

9.
This paper proposes an innovative econometric approach for the computation of 24-h realized volatilities across stock markets in Europe and the US. In particular, we deal with the problem of non-synchronous trading hours and intermittent high-frequency data during overnight non-trading periods. Using high-frequency data for the Euro Stoxx 50 and the S&P 500 Index between 2003 and 2011, we combine squared overnight returns and realized daytime variances to obtain synchronous 24-h realized volatilities for both markets. Specifically, we use a piece-wise weighting procedure for daytime and overnight information to take into account structural breaks in the relation between the two. To demonstrate the new possibilities that our approach opens up, we use the new 24-h volatilities to estimate a bivariate extension of Corsi et al.’s [Econom. Rev., 2008, 27(1–3), 46–78] HAR-GARCH model. The results suggest that the contemporaneous transatlantic volatility interdependence is remarkably stable over the sample period.
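The construction can be sketched as follows (the notation is ours; the weights and break dates are estimated in the paper): with r_{on,t} the overnight return and RV_t^day the realized daytime variance on day t,

\[
\mathrm{RV}^{24\mathrm{h}}_t = \omega_{1,k}\, r_{\mathrm{on},t}^{2} + \omega_{2,k}\, \mathrm{RV}^{\mathrm{day}}_t, \qquad t \in \text{sub-period } k,
\]

with a separate pair of weights (ω_{1,k}, ω_{2,k}) for each sub-period delimited by the structural breaks in the daytime/overnight relation.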

10.
This study investigates the effect of sample size and population distribution on the bootstrap estimated sampling distributions for stochastic dominance (SD) test statistics. Bootstrap critical values for Whitmore's (1978) second- and third-degree stochastic dominance test statistics are found to vary with both data sample size and variance of the population distribution. The results indicate the parametric nature of the statistics and suggest that the bootstrap method should be used to estimate a sampling distribution each time a new data sample is drawn. As an application of the bootstrap method, the January small firm effect is examined. The results conflict with the SD results of others, and indicate that not all investors would prefer to hold just a portfolio of small capitalization firms in January.
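A minimal sketch of the resampling step being advocated (the SD statistic itself, e.g. Whitmore's second-degree statistic, is left as a placeholder sd_statistic and is not reproduced here); the point is that the critical value is re-estimated for each new data sample and sample size:

```python
# Illustrative sketch: bootstrap the sampling distribution of a stochastic-dominance
# test statistic afresh for each data sample, as the study recommends.
import numpy as np

def bootstrap_critical_value(x, y, sd_statistic, n_boot=2000, alpha=0.05, seed=0):
    """Bootstrap the (1 - alpha) critical value of an SD test statistic."""
    rng = np.random.default_rng(seed)
    n, m = len(x), len(y)
    pooled = np.concatenate([x, y])          # resample under the null of a common population
    stats = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(pooled, size=n, replace=True)
        yb = rng.choice(pooled, size=m, replace=True)
        stats[b] = sd_statistic(xb, yb)
    return np.quantile(stats, 1.0 - alpha)   # critical value tied to this sample size
```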

11.
Abstract

During the last few years quite a number of investigations – private as well as official – have been devoted to the study, by numerical methods, of the effect of a given mortality and fertility (or nativity) on population growth. Considerable progress has also been made – above all by the researches of A. J. LOTKA – with the abstract mathematical treatment of this problem, which leads to certain integral equations with interesting asymptotic properties. Until quite recently, however, no account has been taken of the effect of different marriage rates, or rates of nuptiality. For the sake of simplicity the fertility has, in fact, been assumed to have a certain value for each age-group of women as a whole, without any distinction whatever between married and unmarried women. This is of course only a first approach to the more general and more important problem of studying the combined effect of a given mortality, a given nuptiality, and a given fertility of married as well as of unmarried women.
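The integral equations referred to are of the renewal type studied by Lotka; in the standard one-sex formulation,

\[
B(t) = \int_0^{\beta} B(t-a)\, p(a)\, m(a)\, da, \qquad
\int_0^{\beta} e^{-ra}\, p(a)\, m(a)\, da = 1,
\]

where B(t) is the birth rate at time t, p(a) the probability of surviving to age a, m(a) the fertility at age a, β the upper limit of the reproductive ages, and r the asymptotic (intrinsic) rate of increase. The extension discussed here amounts to letting m(a) depend on marital status, and hence on the assumed nuptiality.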

12.
V. E. Gamborg     
Abstract

A glance at the numerous papers dealing with the influence of the rate of interest on the value of premiums will show that most authors aim at computing annuity values for a new rate of interest without first re-calculating the commutation columns. It is only in exceptional cases that they derive premiums or policy-values directly, i.e. without first finding annuity values. Apart from the fact that both premiums and policy-values are implicitly given by a set of annuity values, the reason for the usual procedure lies in the type of calculations contemplated, since they cannot conveniently be applied to such ratios as premiums and policy-values. In the following lines we show how the method developed by A. J. Lotka for the calculation of the rate of increase of a stable population is capable of generalisation and of successful application to our problem; thus the detour via the calculation of annuity values can be avoided.

13.
Abstract

In developing the theory of random sampling, no presupposition is usually made as to the underlying finite population. This tends to render the theoretical work considerably more difficult; and in order to use the derived properties in tests it then becomes necessary to appeal to limit theorems and the like. It is therefore tempting to impose requirements on the distributional properties of the original population in order to avoid theoretical difficulties, and later try to extend the results obtained. The obvious course, then, is to seek the distribution of the estimate resulting from the random sample, on the condition that the original finite population itself arises from some distribution as the result of some process or other.

14.
The aim of our work is to propose a natural framework to account for all the empirically known properties of the multivariate distribution of stock returns. We define and study a ‘nested factor model’, where the linear factors part is standard, but where the log-volatility of the linear factors and of the residuals are themselves endowed with a factor structure and residuals. We propose a calibration procedure to estimate these log-vol factors and the residuals. We find that whereas the number of relevant linear factors is relatively large (10 or more), only two or three log-vol factors emerge in our analysis of the data. In fact, a minimal model where only one log-vol factor is considered is already very satisfactory, as it accurately reproduces the properties of bivariate copulas, in particular, the dependence of the medial point on the linear correlation coefficient, as reported in Chicheportiche and Bouchaud [Int. J. Theor. Appl. Finance, 2012, 15]. We have tested the ability of the model to predict out-of-sample the risk of non-linear portfolios, and found that it performs significantly better than other schemes.
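A schematic form of the nested factor structure (our notation; see the paper for the exact specification): returns follow a standard linear factor model, while the log-volatilities of the factors and of the residuals are themselves given a factor structure,

\[
r_i = \sum_{k} \beta_{ik} f_k + e_i, \qquad
\ln \sigma_{f_k} = \sum_{a} A_{ka} \Omega_a + \omega_k, \qquad
\ln \sigma_{e_i} = \sum_{a} B_{ia} \Omega_a + \tilde{\omega}_i,
\]

with ten or more linear factors f_k but, empirically, only two or three (and often just one) log-vol factors Ω_a.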

15.
Abstract

This paper explores the profitability of momentum strategies by investigating whether a momentum strategy is superior to a benchmark model once the effects of data-snooping have been accounted for. Two data sets are considered. The first consists of US stocks and the second of Swedish stocks. For the US data strong evidence is found of a momentum effect, and hence the hypothesis of weak market efficiency is rejected. Splitting the sample into two parts, it is found that the overall significance is driven by events in the earlier part of the sample. The results for the Swedish data indicate that momentum strategies based on individual stocks generate significant profits. A very weak or no momentum effect can be found when stocks are sorted into portfolios. Finally, and perhaps most importantly, results show that data-snooping bias can be very substantial. Neglecting the problem would lead to very different conclusions.

16.
ABSTRACT

In this article, we attempt to estimate whether firm-specific exchange rate exposures, as affected by hedging activities, can be improved through financial regulation or supervision. To analyze this, we carry out a three-step estimation using a sample of KOSPI 200 firms over 1,803 trading days between 2005 and 2012. We first estimate the relationship between exchange rate exposure and hedging activities and see whether financial regulation had any effect on hedging activities. Furthermore, using TSLS analysis, we estimate the effect on exchange rate exposure of hedging activities induced by tightened financial regulation in the form of corporate governance. We report the following findings. First, firms are less likely to be exposed to exchange risk with more hedging activities. Second, corporate governance has a strongly positive effect on hedging activities. Firms use more hedging tools when they have a strong structure of shareholder protection, clear outside ownership, and a better monitoring system; but the relationship becomes weaker in times of crisis.

17.
Abstract

In many cases one may be interested in the extreme values in a sample. An example of this arises when one of the values in the sample shows a deviation from the rest which can be considered large compared with the differences of the latter values from each other. A question then naturally presents itself: whether such an outlying observation may be considered “defective” in any sense and thus omitted from the sample. This is a very difficult question, from either the mathematical or the philosophical point of view. In the sequel this question will not be touched at all. However, the author has studied a much more restricted problem, which may perhaps contribute to throw some light also upon the former question.

18.
Abstract

Current formulas in credibility theory often estimate expected claims as a function of the sample mean of the experience claims of a policyholder. An actuary may wish to estimate future claims as a function of some statistic other than the sample arithmetic mean of claims, such as the sample geometric mean. This can be suggested to the actuary through the exercise of regressing claims on the geometric mean of prior claims. It can also be suggested through a particular probabilistic model of claims, such as a model that assumes a lognormal conditional distribution. In the first case, the actuary may lean towards using a linear function of the geometric mean, depending on the results of the data analysis. On the other hand, through a probabilistic model, the actuary may want to use the most accurate estimator of future claims, as measured by squared-error loss. However, this estimator might not be linear.

In this paper, I provide a method for balancing the conflicting goals of linearity and accuracy. The credibility estimator proposed minimizes the expectation of a linear combination of a squared-error term and a second-derivative term. The squared-error term measures the accuracy of the estimator, while the second-derivative term constrains the estimator to be close to linear. I consider only those families of distributions with a one-dimensional sufficient statistic and estimators that are functions of that sufficient statistic or of the sample mean. Claim estimators are evaluated by comparing their conditional mean squared errors. In general, functions of the sufficient statistics prove to be better credibility estimators than functions of the sample mean.
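In schematic notation (ours, not necessarily the paper's), with T the one-dimensional sufficient statistic (or the sample mean) and μ(Θ) the conditional expected claim, the proposed estimator g minimizes

\[
\mathbb{E}\Bigl[\bigl(g(T) - \mu(\Theta)\bigr)^{2} + \lambda\, g''(T)^{2}\Bigr],
\]

where λ ≥ 0 balances the two goals: λ = 0 recovers the most accurate (Bayes) estimator, while a large λ forces g toward a linear function of T.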

19.
Abstract

The conventional approach to evolutionary credibility theory assumes a linear state-space model for the longitudinal claims data so that Kalman filters can be used to estimate the claims’ expected values, which are assumed to form an autoregressive time series. We propose a class of linear mixed models as an alternative to linear state-space models for evolutionary credibility and show that the predictive performance is comparable to that of the Kalman filter when the claims are generated by a linear state-space model. More importantly, this approach can be readily extended to generalized linear mixed models for the longitudinal claims data. We illustrate its applications by addressing the “excess zeros” issue that a substantial fraction of policies does not have claims at various times in the period under consideration.
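A minimal sketch of the linear mixed model alternative (notation illustrative): for policyholder i with covariates x_{it} at time t,

\[
Y_{it} = \mathbf{x}_{it}^{\top} \boldsymbol{\beta} + b_i + \varepsilon_{it}, \qquad
b_i \sim N(0, \sigma_b^2), \quad \varepsilon_{it} \sim N(0, \sigma^2),
\]

with future claims predicted from the estimated fixed effects and the empirical best linear unbiased predictor of the random effect b_i. The generalized linear mixed model extension replaces the Gaussian response by, for example, a count or zero-inflated claim distribution with a log link, which is one way to accommodate the “excess zeros” described above.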

20.
Brockman and Turtle [J. Finan. Econ., 2003, 67, 511–529] develop a barrier option framework to show that default barriers are significantly positive. Most implied barriers are typically larger than the book value of corporate liabilities. We show theoretically and empirically that this result is biased due to the approximation of the market value of corporate assets by the sum of the market value of equity and the book value of liabilities. This approximation leads to a significant overestimation of the default barrier. To eliminate this bias, we propose a maximum likelihood (ML) estimation approach to estimate the asset values, asset volatilities, and default barriers. The proposed framework is applied to empirically examine the default barriers of a large sample of industrial firms. This paper documents that default barriers are positive, but not very significant. In our sample, most of the estimated barriers are lower than the book values of corporate liabilities. In addition to the problem with the default barriers, we find significant biases in the estimation of the asset value and the asset volatility of Brockman and Turtle.
