Similar Articles
20 similar articles found.
1.
Abstract

Many of the contagious distributions considered in the biological sciences are members of the generalized Poisson family. Four distributions which belong to this family and have been used frequently are the Negative Binomial (cf. Bliss [2]), Neyman Type A (cf. Beall and Rescia [1]), Poisson Binomial (cf. McGuire et al. [10]) and the generalized Polya-Aeppli (cf. Skellam [14]).
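
For illustration only (not from the paper), a Neyman Type A variate can be simulated as a Poisson-stopped sum of Poisson counts; the minimal Python sketch below uses arbitrary illustrative parameters lam and phi and checks the simulated mean and variance against the compound-Poisson formulas E[X] = λφ and Var[X] = λφ(1 + φ).

```python
import numpy as np

rng = np.random.default_rng(0)

def rneyman_type_a(size, lam=2.0, phi=1.5, rng=rng):
    """Simulate Neyman Type A variates as a Poisson(lam)-stopped
    sum of independent Poisson(phi) counts (cluster interpretation)."""
    n_clusters = rng.poisson(lam, size=size)          # number of clusters per unit
    return np.array([rng.poisson(phi, size=n).sum() for n in n_clusters])

x = rneyman_type_a(100_000)
# Theory for the compound Poisson: E[X] = lam*phi, Var[X] = lam*phi*(1 + phi)
print("sample mean/var:", x.mean(), x.var())
print("theory mean/var:", 2.0 * 1.5, 2.0 * 1.5 * (1 + 1.5))
```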

2.
This paper examines the use of random matrix theory as it has been applied to model large financial datasets, especially for the purpose of estimating the bias inherent in Mean-Variance portfolio allocation when a sample covariance matrix is substituted for the true underlying covariance. Such problems were observed and modeled in the seminal work of Laloux et al. [Noise dressing of financial correlation matrices. Phys. Rev. Lett., 1999, 83, 1467] and rigorously proved by Bai et al. [Enhancement of the applicability of Markowitz's portfolio optimization by utilizing random matrix theory. Math. Finance, 2009, 19, 639–667] under minimal assumptions. If the returns on assets to be held in the portfolio are assumed independent and stationary, then these results are universal in that they do not depend on the precise distribution of returns. This universality has been somewhat misrepresented in the literature, however, as asymptotic results require that an arbitrarily long time horizon be available before such predictions necessarily become accurate. In order to reconcile these models with the highly non-Gaussian returns observed in real financial data, a new ensemble of random rectangular matrices is introduced, modeled on the observations of independent Lévy processes over a fixed time horizon.
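
As a hedged sketch of the random-matrix benchmark described above (not the authors' code), the eigenvalues of a sample correlation matrix built from purely independent, stationary Gaussian "returns" can be compared with the Marchenko–Pastur bulk edges; the portfolio size N and horizon T below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 100, 400                      # assets and observations (illustrative)
q = N / T

# i.i.d. standardized returns: the "pure noise" benchmark
R = rng.standard_normal((T, N))
C = np.corrcoef(R, rowvar=False)     # sample correlation matrix
eigvals = np.linalg.eigvalsh(C)

# Marchenko-Pastur bulk edges for unit-variance noise
lam_min = (1 - np.sqrt(q)) ** 2
lam_max = (1 + np.sqrt(q)) ** 2
print(f"MP support: [{lam_min:.3f}, {lam_max:.3f}]")
print(f"empirical eigenvalue range: [{eigvals.min():.3f}, {eigvals.max():.3f}]")
# With real returns, eigenvalues escaping the MP support are the usual candidates
# for genuine correlation structure rather than estimation noise.
```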

3.
1. Introduction.

The two cases where a normal distribution is “truncated” at a known point have been treated by R. A. Fisher (1) and W. L. Stevens (2), respectively. Fisher treated the case in which all record is omitted of observations below a given value, while Stevens treated the case in which the frequency of observations below a given value is recorded but the individual values of these observations are not specified. In both cases the distribution is usually termed truncated. In the first case, admittedly, the observations form a random sample drawn from an incomplete normal distribution, but in the second case we sample from a complete normal distribution in which the obtainable information in a sense has been censored, either by nature or by ourselves. To distinguish between the two cases the distributions will be called truncated and censored, respectively. (The term “censored” was suggested to me by Mr J. E. Kerrich.) The term “point of truncation” will be used for both.
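
As a rough illustration of the distinction (assumed here, not taken from Fisher or Stevens), the two likelihoods can be maximized numerically: the truncated case drops the observations below the known point c entirely, while the censored case keeps only their count m.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)
mu_true, sigma_true, c = 5.0, 2.0, 4.0   # illustrative parameters; truncation point c is known
full = rng.normal(mu_true, sigma_true, 2000)

obs = full[full >= c]          # values actually recorded
m = np.sum(full < c)           # censored case: only this count is known

def nll_truncated(theta, x, c):
    """Negative log-likelihood when values below c are simply missing."""
    mu, sig = theta
    if sig <= 0:
        return np.inf
    return -(stats.norm.logpdf(x, mu, sig).sum()
             - len(x) * stats.norm.logsf(c, mu, sig))

def nll_censored(theta, x, m, c):
    """Negative log-likelihood when the count m below c is known."""
    mu, sig = theta
    if sig <= 0:
        return np.inf
    return -(stats.norm.logpdf(x, mu, sig).sum()
             + m * stats.norm.logcdf(c, mu, sig))

start = np.array([obs.mean(), obs.std()])
fit_trunc = optimize.minimize(nll_truncated, start, args=(obs, c), method="Nelder-Mead")
fit_cens = optimize.minimize(nll_censored, start, args=(obs, m, c), method="Nelder-Mead")
print("truncated-sample MLE (mu, sigma):", fit_trunc.x)
print("censored-sample  MLE (mu, sigma):", fit_cens.x)
```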

4.
I examine optimal incentives and performance measurement in a model where an agent has specific knowledge (in the sense of Jensen and Meckling) about the consequences of his actions for the principal. Contracts can be based both on “input” measures related to the agent's actions and an “output” measure related to the principal's payoff. Whereas input‐based pay minimizes income risk, only output‐based pay encourages the agent to use his knowledge efficiently. In general, it is optimal to use both kinds of performance measures. The results help to explain some empirical puzzles and lead to several new predictions.

5.
Summary

An estimator which is a linear function of the observations and which minimises the expected square error within the class of linear estimators is called an “optimal linear” estimator. Such an estimator may also be regarded as a “linear Bayes” estimator in the spirit of Hartigan (1969). Optimal linear estimators of the unknown mean of a given data distribution have been described by various authors; corresponding “linear empirical Bayes” estimators have also been developed.

The present paper exploits the results of Lloyd (1952) to obtain optimal linear estimators based on order statistics of the location and/or scale parameters of a continuous univariate data distribution. Related “linear empirical Bayes” estimators which can be applied in the absence of the exact knowledge of the optimal estimators are also developed. This approach allows one to extend the results to the case of censored samples.
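
A minimal sketch, in the spirit of Lloyd's (1952) generalized-least-squares construction, of the optimal linear estimator of a normal location and scale from order statistics; the means and covariances of the standard-normal order statistics are approximated here by Monte Carlo, which is a convenience of this sketch rather than part of the method.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10                                   # sample size (illustrative)

# Monte Carlo approximation of the mean vector and covariance matrix of
# standard-normal order statistics (Lloyd's method takes these as known).
sims = np.sort(rng.standard_normal((200_000, n)), axis=1)
alpha = sims.mean(axis=0)                # E[Z_(i)]
V = np.cov(sims, rowvar=False)           # Cov[Z_(i), Z_(j)]

# Generalized least squares: the ordered sample satisfies
# E[x_(i)] = mu + sigma * alpha_i with covariance sigma^2 * V.
X = np.column_stack([np.ones(n), alpha])
Vinv = np.linalg.inv(V)
coef_mat = np.linalg.inv(X.T @ Vinv @ X) @ X.T @ Vinv   # rows give the BLUE weights

x = np.sort(rng.normal(loc=2.0, scale=3.0, size=n))     # one ordered sample
mu_hat, sigma_hat = coef_mat @ x
print("BLUE estimates (mu, sigma):", mu_hat, sigma_hat)
```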

6.
Abstract

The problem of “optimum stratification” was discussed by the first-mentioned author in an earlier paper (1). The discussion in that paper was limited to sampling from an infinite population, represented by a density function f(y). The optimum points y_i of stratification, for estimating the mean μ by the stratified sample mean, were determined by solving a set of equations that give the stratification points y_i minimizing the sampling variance V(ȳ) (provided the usual condition for a minimum is fulfilled).
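
The paper's stratification equations are not reproduced above; purely as an illustration, approximately optimal boundaries can also be found by direct numerical minimization of the Neyman-allocation term Σ_h W_h σ_h for an assumed density (a standard exponential here, truncated at an arbitrary upper limit for the numerical integration).

```python
import numpy as np
from scipy import integrate, optimize

def stratum_moments(a, b, pdf):
    """Weight, mean, and standard deviation of pdf restricted to [a, b]."""
    W, _ = integrate.quad(pdf, a, b)
    if W < 1e-12:
        return 0.0, 0.0, 0.0
    m1, _ = integrate.quad(lambda y: y * pdf(y), a, b)
    m2, _ = integrate.quad(lambda y: y * y * pdf(y), a, b)
    mu = m1 / W
    var = max(m2 / W - mu ** 2, 0.0)
    return W, mu, np.sqrt(var)

def neyman_objective(boundaries, pdf, lo, hi):
    """Sum_h W_h * sigma_h; its square is proportional to the Neyman-allocation
    variance of the stratified mean, so minimizing it gives the optimum boundaries."""
    pts = [lo, *np.sort(boundaries), hi]
    return sum(W * s for W, _, s in (stratum_moments(a, b, pdf)
                                     for a, b in zip(pts[:-1], pts[1:])))

pdf = lambda y: np.exp(-y) if y >= 0 else 0.0    # standard exponential density (illustrative)
res = optimize.minimize(neyman_objective, x0=[0.7, 1.8],   # two interior boundaries -> three strata
                        args=(pdf, 0.0, 20.0), method="Nelder-Mead")
print("approximately optimal stratification points:", np.sort(res.x))
```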

7.
In this paper, we develop a long memory orthogonal factor (LMOF) multivariate volatility model for forecasting the covariance matrix of financial asset returns. We evaluate the LMOF model using the volatility timing framework of Fleming et al. [J. Finance, 2001, 56, 329–352] and compare its performance with that of both a static investment strategy based on the unconditional covariance matrix and a range of dynamic investment strategies based on existing short memory and long memory multivariate conditional volatility models. We show that investors should be willing to pay to switch from the static strategy to a dynamic volatility timing strategy and that, among the dynamic strategies, the LMOF model consistently produces forecasts of the covariance matrix that are economically more useful than those produced by the other multivariate conditional volatility models, both short memory and long memory. Moreover, we show that combining long memory volatility with the factor structure yields better results than employing either long memory volatility or the factor structure alone. The factor structure also significantly reduces transaction costs, thus increasing the feasibility of dynamic volatility timing strategies in practice. Our results are robust to estimation error in expected returns, the choice of risk aversion coefficient, the estimation window length and sub-period analysis.

8.
Abstract

It is shown how one may construct tests and confidence regions for the unknown structural parameters in empirical linear Bayes estimation problems. The case of the collateral units having varying “designs” (i.e. regressor and covariance matrices) may be treated under the assumption that the design variables independently follow a common statistical law. The results are of an asymptotic nature.

9.
The use of improved covariance matrix estimators as an alternative to the sample estimator is considered an important approach for enhancing portfolio optimization. Here we empirically compare the performance of nine improved covariance estimation procedures using daily returns of 90 highly capitalized US stocks for the period 1997–2007. We find that the usefulness of covariance matrix estimators strongly depends on the ratio between the estimation period T and the number of stocks N, on the presence or absence of short selling, and on the performance metric considered. When short selling is allowed, several estimation methods achieve a realized risk that is significantly smaller than that obtained with the sample covariance method. This is particularly true when T/N is close to one. Moreover, many estimators reduce the fraction of negative portfolio weights, while little improvement is achieved in the degree of diversification. By contrast, when short selling is not allowed and T > N, the considered methods are unable to outperform the sample covariance in terms of realized risk, but can give much more diversified portfolios than those obtained with the sample covariance. When T < N, the use of the sample covariance matrix and of the pseudo-inverse gives portfolios with very poor performance.
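
As a hedged companion sketch (simulated data, not the paper's CRSP sample), the effect of replacing the sample covariance with a shrinkage estimator when T/N is close to one can be seen with scikit-learn's Ledoit–Wolf estimator, one member of the family of improved estimators; the "true" covariance below is an arbitrary one-factor-like matrix.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(4)
N, T = 90, 120                      # many assets, short window: T/N close to one
true_cov = 0.3 * np.ones((N, N)) + 0.7 * np.eye(N)   # one-factor-like true covariance
L = np.linalg.cholesky(true_cov)

def gmv_weights(cov):
    """Unconstrained global-minimum-variance weights (short selling allowed)."""
    w = np.linalg.solve(cov, np.ones(len(cov)))
    return w / w.sum()

in_sample = rng.standard_normal((T, N)) @ L.T
out_sample = rng.standard_normal((T, N)) @ L.T        # "future" returns for realized risk

for name, cov in [("sample", np.cov(in_sample, rowvar=False)),
                  ("Ledoit-Wolf", LedoitWolf().fit(in_sample).covariance_)]:
    w = gmv_weights(cov)
    realized = (out_sample @ w).std()
    print(f"{name:12s} realized volatility: {realized:.4f}  "
          f"negative weights: {(w < 0).mean():.2%}")
```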

10.
Abstract

Credibility is a form of insurance pricing that is widely used, particularly in North America. The theory of credibility has been called a “cornerstone” in the field of actuarial science. Students of the North American actuarial bodies also study loss distributions, the process of statistical inference of relating a set of data to a theoretical (loss) distribution. In this work, we develop a direct link between credibility and loss distributions through the notion of a copula, a tool for understanding relationships among multivariate outcomes.

This paper develops credibility using a longitudinal data framework. In a longitudinal data framework, one might encounter data from a cross section of risk classes (towns) with a history of insurance claims available for each risk class. For the marginal claims distributions, we use generalized linear models, an extension of linear regression that also encompasses Weibull and Gamma regressions. Copulas are used to model the dependencies over time; specifically, this paper is the first to propose using a t-copula in the context of generalized linear models. The t-copula is the copula associated with the multivariate t-distribution; like the univariate t-distributions, it seems especially suitable for empirical work. Moreover, we show that the t-copula gives rise to easily computable predictive distributions that we use to generate credibility predictors. Like Bayesian methods, our copula credibility prediction methods allow us to provide an entire distribution of predicted claims, not just a point prediction.

We present an illustrative example of Massachusetts automobile claims, and compare our new credibility estimates with those currently existing in the literature.
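
A small sketch of the t-copula construction described above, with Gamma marginals standing in for the fitted margins of a Gamma GLM; the correlation, degrees of freedom, and Gamma parameters are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
T_years, nu, rho = 5, 4, 0.5                  # repeated observations per risk class, t d.o.f., correlation
corr = rho * np.ones((T_years, T_years)) + (1 - rho) * np.eye(T_years)

# Step 1: draw from a multivariate t with the chosen correlation matrix.
mvt = stats.multivariate_t(loc=np.zeros(T_years), shape=corr, df=nu)
z = mvt.rvs(size=10_000, random_state=rng)

# Step 2: map each margin through the univariate t CDF -> t-copula uniforms.
u = stats.t.cdf(z, df=nu)

# Step 3: apply Gamma marginal quantiles (stand-in for a Gamma GLM's fitted margins).
claims = stats.gamma.ppf(u, a=2.0, scale=500.0)

rho_s, _ = stats.spearmanr(claims[:, 0], claims[:, 1])
print("empirical Spearman correlation of years 1 and 2:", round(rho_s, 3))
```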

11.
Abstract

One of the acknowledged difficulties with pricing immediate annuities is that underwriting the annuitant's life is the exception rather than the rule. In the absence of underwriting, the price paid for a life-contingent annuity is the same for all sales at a given age. This exposes the market (insurance company and potential policyholder alike) to antiselection. The insurance company worries that only the healthiest people choose a life-contingent annuity and therefore adjusts mortality accordingly. The potential policyholders worry that they are not being compensated for their relatively poor health and choose not to purchase what would otherwise be a very beneficial product.

This paper develops a model of underlying, unobserved health. Health is a state variable that follows a first-order Markov process. An individual reaches the state “death” either by accident from any health state or through progressively declining health. The health state is one-dimensional, in the sense that health can either “improve” or “deteriorate” by moving farther from or closer to the “death” state, respectively. The probability of death in a given year is a function of health state, not of age. Therefore, in this model a person is exactly as old as he or she feels.

I first demonstrate that a multistate, ageless Markov model can match the mortality patterns in the common annuity mortality tables. The model is extended to consider several types of mortality improvements: permanent through decreasing probability of deteriorating health, temporary through improved distribution of initial health state, and plateau through the effects of past health improvements.

I then construct an economic model of optimal policyholder behavior, assuming that the policyholder either knows his or her health state or has some limited information. The value of mortality risk transfer through purchasing a life-contingent annuity is estimated for each health state under various risk-aversion parameters. Given the economic model for optimal purchasing of annuities, the value of underwriting (limited information about policyholder health state) is demonstrated.
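
For intuition only (the transition probabilities below are invented, not calibrated to any annuity table), a tiny "ageless" Markov chain with an absorbing death state shows how survival depends on the current health state rather than on age.

```python
import numpy as np

# States 0..3 are health states (0 = best), state 4 is the absorbing "death" state.
P = np.array([
    [0.90, 0.08, 0.01, 0.00, 0.01],   # healthy: small accident hazard only
    [0.10, 0.75, 0.10, 0.03, 0.02],
    [0.00, 0.10, 0.70, 0.15, 0.05],
    [0.00, 0.00, 0.10, 0.70, 0.20],   # frail: high one-year death probability
    [0.00, 0.00, 0.00, 0.00, 1.00],   # death is absorbing
])

def survival_curve(initial_state, years):
    """P(still alive after t years | current health state), t = 1..years."""
    dist = np.zeros(P.shape[0])
    dist[initial_state] = 1.0
    out = []
    for _ in range(years):
        dist = dist @ P
        out.append(1.0 - dist[-1])    # probability mass not yet absorbed in "death"
    return np.array(out)

for s in range(4):
    print(f"state {s}: 10-year survival = {survival_curve(s, 10)[-1]:.3f}")
```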

12.
13.
Recently there has been a growing interest in the scenario model of covariance as an alternative to the one-factor or many-factor models. We show how the covariance matrix resulting from the scenario model can easily be made diagonal by adding new variables linearly related to the amounts invested; we note the meanings of these new variables; we note how portfolio variance divides itself into “within scenario” and “between scenario” variances; and we extend the results to models in which scenarios and factors both appear, where factor distributions and effects may or may not be scenario sensitive.
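
A minimal numerical check of the within/between decomposition mentioned above (assuming, for simplicity of this sketch, the same diagonal within-scenario covariance in every scenario; the numbers are illustrative).

```python
import numpy as np

rng = np.random.default_rng(6)
n_assets, scenarios = 4, 3
p = np.array([0.5, 0.3, 0.2])                              # scenario probabilities
mu_s = rng.normal(0.0, 0.05, size=(scenarios, n_assets))   # scenario mean returns
sigma_within = 0.02                                        # within-scenario risk (same everywhere)

w = np.full(n_assets, 1.0 / n_assets)                      # equal-weight portfolio

# Between-scenario covariance of the scenario means
mu_bar = p @ mu_s
between = (mu_s - mu_bar).T @ np.diag(p) @ (mu_s - mu_bar)
within = sigma_within**2 * np.eye(n_assets)                # diagonal within-scenario covariance

total_cov = within + between
print("within-scenario part :", w @ within @ w)
print("between-scenario part:", w @ between @ w)
print("total portfolio var  :", w @ total_cov @ w)         # equals the sum of the two parts
```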

14.
Recent literature suggests that optimal asset‐allocation models struggle to consistently outperform the 1/N naïve diversification strategy, which highlights estimation‐risk concerns. We propose a dichotomous classification of asset‐allocation models based on which elements of the inverse covariance matrix a model uses: diagonal only versus full matrix. We argue that parsimonious diagonal‐only strategies, which use limited information such as volatility or idiosyncratic volatility, are likely to offer a good tradeoff between incorporating limited information and mitigating estimation risk. Evaluating five sets of portfolios over 1926–2012, we find that 1/N is generally not optimal when compared with these diagonal strategies.
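
As a toy illustration of a diagonal-only strategy (simulated independent assets, parameters arbitrary, not the paper's portfolios), inverse-variance weights use only the diagonal of the inverse covariance matrix and can be compared with 1/N out of sample.

```python
import numpy as np

rng = np.random.default_rng(7)
N, T = 25, 240
vols = rng.uniform(0.01, 0.05, size=N)                 # heterogeneous volatilities
returns = rng.standard_normal((T, N)) * vols           # independent assets (illustrative)

est = returns[:120]                                    # estimation window
oos = returns[120:]                                    # evaluation window

w_naive = np.full(N, 1.0 / N)
inv_var = 1.0 / est.var(axis=0)                        # uses only the diagonal of the
w_diag = inv_var / inv_var.sum()                       # inverse covariance matrix

for name, w in [("1/N", w_naive), ("inverse-variance", w_diag)]:
    print(f"{name:17s} out-of-sample volatility: {(oos @ w).std():.4f}")
```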

15.
Abstract

It was the Swiss actuary Chr. Moser who, in lectures at Bern University at the turn of the century, gave the name “self-renewing aggregate” to what Vajda (1947) has called the “unstationary community” of lives, namely where deaths at any epoch are immediately replaced by an equivalent number of births. It was Moser too (1926) who coined the expression “steady state” for the stationary community in which the age distribution at any time follows the life table (King, 1887). With such a distinguished actuarial history, excellently summarized by Saxer (1958, Ch. IV), it behoves every actuary to know at least the definitions and modus operandi of today's so-called renewal (point), or recurrent event, processes.

16.
Recent theory has demonstrated that the Arbitrage Pricing Model with K factors critically depends on whether K eigenvalues dominate the covariance matrix of returns as the number of securities grows large. The purpose of this paper is to test whether sample covariance matrices can be characterized as having K large eigenvalues. Using all available data on the 1983 CRSP tapes, we compute sample covariance matrices of returns in sequentially larger portfolios of securities. Analyzing their eigenvalues, we find evidence that one eigenvalue dominates the covariance matrix, indicating that a one-factor model may describe security pricing. We also find that, for values of K larger than one, there is no obvious way to choose the number of factors. Nevertheless, we find that while only the first eigenvalue dominates the matrix, the first five eigenvalues are growing more distinct.
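
A hedged sketch of the eigenvalue-dominance diagnostic on simulated one-factor returns (parameters arbitrary, not the CRSP data): the largest eigenvalue of the sample covariance matrix grows with the number of securities N while the remaining eigenvalues stay bounded.

```python
import numpy as np

rng = np.random.default_rng(8)
T = 500                                   # number of return observations (illustrative)
idio = 0.5                                # idiosyncratic volatility in the simulated one-factor model

for N in (10, 50, 200):
    betas = rng.normal(0.0, 1.0, size=N)
    factor = rng.standard_normal((T, 1))
    returns = factor @ betas[None, :] + idio * rng.standard_normal((T, N))
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(returns, rowvar=False)))[::-1]
    print(f"N={N:4d}: largest eigenvalue {eigvals[0]:8.2f}, "
          f"second {eigvals[1]:6.2f}, ratio {eigvals[0] / eigvals[1]:6.1f}")
# The largest eigenvalue grows roughly in proportion to N while the rest stay
# bounded -- the "one dominant eigenvalue" pattern discussed in the abstract.
```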

17.
Can a Coherent Risk Measure Be Too Subadditive?
We consider the problem of determining appropriate solvency capital requirements for an insurance company or a financial institution. We demonstrate that the subadditivity condition that is often imposed on solvency capital principles can lead to the undesirable situation where the shortfall risk increases by a merger. We propose to complement the subadditivity condition by a regulator's condition. We find that for an explicitly specified confidence level, the Value‐at‐Risk satisfies the regulator's condition and is the “most efficient” capital requirement in the sense that it minimizes some reasonable cost function. Within the class of concave distortion risk measures, of which the elements, in contrast to the Value‐at‐Risk, exhibit the subadditivity property, we find that, again for an explicitly specified confidence level, the Tail‐Value‐at‐Risk is the optimal capital requirement satisfying the regulator's condition.
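
A small empirical sketch (illustrative heavy-tailed losses, not from the paper) of the two capital requirements discussed above: Value-at-Risk as a quantile and Tail-Value-at-Risk as the mean loss beyond it, compared across a merger of two portfolios.

```python
import numpy as np

rng = np.random.default_rng(9)
alpha = 0.99

def var(losses, alpha):
    """Empirical Value-at-Risk: the alpha-quantile of the loss distribution."""
    return np.quantile(losses, alpha)

def tvar(losses, alpha):
    """Empirical Tail-Value-at-Risk: mean loss beyond the VaR."""
    q = var(losses, alpha)
    return losses[losses >= q].mean()

# Two independent heavy-tailed loss portfolios (illustrative Pareto-type losses)
X = rng.pareto(1.5, 200_000) + 1.0
Y = rng.pareto(1.5, 200_000) + 1.0

# Compare stand-alone capital with capital for the merged portfolio.
print("VaR(X) + VaR(Y)  :", var(X, alpha) + var(Y, alpha))
print("VaR(X + Y)       :", var(X + Y, alpha))
print("TVaR(X) + TVaR(Y):", tvar(X, alpha) + tvar(Y, alpha))
print("TVaR(X + Y)      :", tvar(X + Y, alpha))
# TVaR is subadditive by construction; VaR need not be for heavy-tailed or skewed losses.
```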

18.
Abstract

In this paper, after having defined the duration for a “generic” life insurance contract, we bring out some of its properties. We also prove that, in some cases, duration is a natural extension of well-known duration indices.

19.
This article incorporates an information structure with partial information into the canonical hold‐up problem. The optimal information structure balances the tradeoff between ex ante efficiency (the “information rent” effect) and ex post efficiency (the “bargaining disagreement” effect). With one‐shot bargaining, it occurs at an intermediate level of information asymmetry; when there is repeated bargaining, it is attained with perfect asymmetry. Asymmetric information, the parameter that is frequently ignored in the literature, turns out to be an important welfare instrument for the hold‐up problem. Our results therefore provide a basis for institutional design regarding the optimal control of information flow.

20.
ABSTRACT

This study reveals the evolution of the Belt and Road trade network and discusses the determinants of trade relationships by employing network analysis methods. Using trade flow data for 65 countries in 2012, 2014, and 2016, the network indices show that the Belt and Road initiative has significantly improved the trade network's connectivity. The blockmodel results show that the trade network can be partitioned into four blocks: “Dominators,” “Brokers,” “Generators,” and “Receivers.” Furthermore, according to the QAP model, spatial proximity, cultural differences, trade agreements, economic distance, and trade facilitation have significant impacts on the formation of the trade network.
