Similar Articles
20 similar articles found (search time: 31 ms)
1.
Abstract

The adoption of Statement of Financial Accounting Standards No. 97 (SFAS 97) eliminated the “lock-in” concept introduced in SFAS 60. Since many of the actuarial assumptions used in the calculation of the deferred acquisition cost (DAC) asset are difficult to predict over an extended period of time, “dynamic unlocking” was a sensible solution. Although this “dynamic unlocking” keeps the assumptions in line with recent experience, it comes at a cost: increased volatility of GAAP earnings. Some of this volatility is warranted, since it accentuates the effects on earnings of certain changes in the underlying experience. Other volatility may be unwarranted, arising from a misapplication of the principles underlying SFAS 97 and SFAS 120 or from the manner in which changes in experience were reflected. In addition, most analysts expect the amortization of deferred acquisition costs to increase when earnings are better than expected and to decrease when earnings are worse than expected; often the amortization behaves contrary to these expectations. This article analyzes what causes this volatility, explains why the amortization can behave contrary to expectations, and suggests several techniques for minimizing these unwarranted results.
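To make the amortization mechanics concrete, the sketch below illustrates a simplified "k-factor" style calculation of the kind commonly associated with SFAS 97: DAC is amortized in proportion to estimated gross profits (EGPs), and the factor is recomputed retrospectively when experience is unlocked. All figures and the revision pattern are hypothetical, and the sketch ignores interest accretion on the DAC balance and other refinements; it is meant only to show how an upward revision of future EGPs can reduce current-period amortization even when current earnings beat expectations.

```python
import numpy as np

i = 0.06                                   # assumed discount/credited rate
deferrals = 1000.0                         # acquisition costs deferred at issue
egp_original = np.array([300.0] * 5)       # originally estimated gross profits (EGPs)

def k_factor(deferred, egps, i):
    """k = PV(deferred costs) / PV(estimated gross profits)."""
    disc = (1.0 + i) ** -np.arange(1, len(egps) + 1)
    return deferred / float(np.sum(egps * disc))

k_original = k_factor(deferrals, egp_original, i)
amort_expected_y1 = k_original * egp_original[0]      # amortization the original schedule implied

# Year-1 EGP comes in better than expected, and unlocking also revises future EGPs upward.
egp_actual_y1 = 360.0
egp_unlocked = np.array([egp_actual_y1, 380.0, 380.0, 380.0, 380.0])
k_unlocked = k_factor(deferrals, egp_unlocked, i)     # retrospectively recomputed k

amort_actual_y1 = k_unlocked * egp_actual_y1
# Despite higher current earnings, amort_actual_y1 comes out below amort_expected_y1 here,
# because the larger PV of future EGPs lowers k by more than the current EGP rose.
```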

2.
Abstract

The sustained reduction in mortality rates and its systematic underestimation have been attracting significant interest from researchers in recent times because of their potential impact on population size and structure, social security systems, and (from an actuarial perspective) the life insurance and pensions industry worldwide. Despite the number of papers published in recent years, a comprehensive review has not yet been developed.

This paper attempts to be the starting point for that review, highlighting the importance of recently published research—most of the references cited span the last 10 years—and covering the main methodologies that have been applied to the projection of mortality rates in the United Kingdom and the United States. A comparative review of techniques used in official population projections, actuarial applications, and the most influential scientific approaches is provided. In the course of the review an attempt is made to identify common themes and similarities in methods and results.

In both official projections and actuarial applications there is some evidence of systematic overestimation of mortality rates. Models developed by academic researchers seem to reveal a trade-off between the plausibility of the projected age pattern and the ease of measuring the uncertainty involved. The Lee-Carter model is one approach that appears to solve this apparent dilemma.
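For reference, the Lee-Carter model mentioned above expresses the central death rate at age x in year t as

```latex
\ln m_{x,t} = a_x + b_x\,k_t + \varepsilon_{x,t}
```

where a_x is the average age profile of log mortality, k_t is a period index of the overall level of mortality (typically projected as a random walk with drift), b_x measures the sensitivity of age x to changes in k_t, and the last term is an error.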

There is a broad consensus across the resulting projections: (1) an approximately log-linear relationship between mortality rates and time, (2) mortality improvements that decrease with age, and (3) an increasing trend in the relative rate of mortality change over age. In addition, evidence suggests that excessive reliance on expert opinion—present to some extent in all methods—has led to systematic underestimation of mortality improvements.

3.
Most discussions of capital budgeting take for granted that discounted cash flow (DCF) and real options valuation (ROV) are very different methods that are meant to be applied in different circumstances. Such discussions also typically assume that DCF is “easy” and ROV is “hard”—or at least dauntingly unfamiliar—and that, mainly for this reason, managers often use DCF and rarely ROV. This paper argues that all three assumptions are wrong or at least seriously misleading. DCF and ROV both assign a present value to risky future cash flows. DCF entails discounting expected future cash flows at the expected return on an asset of comparable risk. ROV uses “risk‐neutral” valuation, which means computing expected cash flows based on “risk‐neutral” probabilities and discounting these flows at the risk‐free rate. Using a series of single‐period examples, the author demonstrates that both methods, when done correctly, should provide the same answer. Moreover, in most ROV applications—those where there is no forward price or “replicating portfolio” of traded assets—a “preliminary” DCF valuation is required to perform the risk‐neutral valuation. So why use ROV at all? In cases where project risk and the discount rates are expected to change over time, the risk‐neutral ROV approach will be easier to implement than DCF (since adjusting cash flow probabilities is more straightforward than adjusting discount rates). The author uses multi‐period examples to illustrate further both the simplicity of ROV and the strong assumptions required for a typical DCF valuation. But the simplicity that results from discounting with risk‐free rates is not the only benefit of using ROV instead of—or together with—traditional DCF. The use of formal ROV techniques may also encourage managers to think more broadly about the flexibility that is (or can be) built into future business decisions, and thus to choose from a different set of possible investments. To the extent that managers who use ROV have effectively adopted a different business model, there is a real and important difference between the two valuation techniques. Consistent with this possibility, much of the evidence from both surveys and academic studies of managerial behavior and market pricing suggests that managers and investors implicitly take account of real options when making investment decisions.
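A single-period numerical sketch of the equivalence described above (all figures hypothetical): a project whose payoff is proportional to a traded comparable asset is valued once by DCF, discounting the expected cash flow at the comparable asset's expected return, and once by risk-neutral valuation, discounting the risk-neutral expected cash flow at the risk-free rate.

```python
# Hypothetical single-period example: a project paying exactly twice the payoff of a
# traded comparable asset, valued by DCF and by risk-neutral (ROV-style) valuation.
rf = 0.05                                  # risk-free rate (assumed)
p_up, p_down = 0.6, 0.4                    # real-world probabilities of the two states
S0, S_up, S_down = 100.0, 130.0, 90.0      # traded comparable asset: price and payoffs
X_up, X_down = 260.0, 180.0                # project cash flows in the same two states

# DCF: discount the expected cash flow at the comparable asset's expected return.
k = (p_up * S_up + p_down * S_down) / S0 - 1.0
dcf_value = (p_up * X_up + p_down * X_down) / (1.0 + k)

# ROV: back risk-neutral probabilities out of the traded asset, discount at the risk-free rate.
q_up = ((1.0 + rf) * S0 - S_down) / (S_up - S_down)
rov_value = (q_up * X_up + (1.0 - q_up) * X_down) / (1.0 + rf)

print(dcf_value, rov_value)                # both equal 200.0
```

Both routes give the same value; the risk-neutral probabilities are backed out of the traded asset's price, which is the step that requires a "preliminary" valuation when no forward price or replicating portfolio is available.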

4.
Most of the foundations of valuation theory have been designed for use in developed markets. Because of the greater, and in some cases different, risks associated with emerging markets (although recent experience might suggest otherwise), investors and corporate managers are often uncomfortable using traditional methods. The typical way of capturing emerging-market risks is to increase the discount rate in the standard valuation model. But, as the authors argue, such adjustments have the effect of undermining some of the basic assumptions of the CAPM-based discounted cash flow model. The standard theory of capital budgeting suggests that estimates of unconditional expected cash flows should be discounted at CAPM discount rates (or betas) that reflect only “systematic,” or “nondiversifiable,” market-wide risks. In practice, however, analysts tend to take what are really estimates of “conditional” expected cash flows—that is, conditional on the firm or its country avoiding a crisis—and discount them at higher rates that reflect not only systematic risks, but diversifiable risks that typically involve a higher probability of crisis-driven costs of default. But there is almost no basis in theory for the size of the increases in discount rates. In this article, the authors propose that analysts in emerging markets avoid this discount rate problem by using simulation techniques to capture emerging-market risks in their estimates of unconditional expected cash flows—in other words, estimates that directly incorporate the possibility of an emerging-market crisis and its consequences. Having produced such estimates, analysts can then discount them using the standard Global CAPM.
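A minimal one-period sketch of the simulation approach described above, with all probabilities, distributions, and rates assumed for illustration: crisis risk is built into the unconditional expected cash flow rather than into the discount rate, and the result is discounted at a standard Global CAPM rate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

p_crisis = 0.15                            # assumed probability of a crisis over the period
cf_no_crisis = rng.normal(100.0, 15.0, n)  # "conditional" forecast: cash flow if no crisis
cf_crisis = rng.normal(35.0, 20.0, n)      # cash flow if a crisis hits (devaluation, default costs)

crisis = rng.random(n) < p_crisis
cf_unconditional = np.where(crisis, cf_crisis, cf_no_crisis)

r_global_capm = 0.10                       # Global CAPM rate reflecting systematic risk only (assumed)
value = cf_unconditional.mean() / (1.0 + r_global_capm)
```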

5.
6.
Summary

An estimator which is a linear function of the observations and which minimises the expected square error within the class of linear estimators is called an “optimal linear” estimator. Such an estimator may also be regarded as a “linear Bayes” estimator in the spirit of Hartigan (1969). Optimal linear estimators of the unknown mean of a given data distribution have been described by various authors; corresponding “linear empirical Bayes” estimators have also been developed.

The present paper exploits the results of Lloyd (1952) to obtain optimal linear estimators, based on order statistics, of the location and/or scale parameters of a continuous univariate data distribution. Related “linear empirical Bayes” estimators, which can be applied in the absence of exact knowledge of the optimal estimators, are also developed. This approach allows one to extend the results to the case of censored samples.
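A minimal sketch of Lloyd-type estimation for the normal location-scale case, with the moments of the standardized order statistics approximated by simulation rather than taken from tables (an assumption of this sketch): the ordered sample is regressed, by generalized least squares, on the expected values of the standardized order statistics.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10                                      # sample size

# Moments of standard-normal order statistics, approximated here by simulation
# (Lloyd's method treats them as known constants for the standardized distribution).
sims = np.sort(rng.standard_normal((200_000, n)), axis=1)
alpha = sims.mean(axis=0)                   # E[Z_(i)]
V = np.cov(sims, rowvar=False)              # Cov[Z_(i), Z_(j)]

def lloyd_blue(x):
    """Best linear unbiased estimates of location and scale from the order statistics."""
    x = np.sort(np.asarray(x, dtype=float))
    A = np.column_stack([np.ones(n), alpha])     # model: x_(i) = mu + sigma * alpha_i + error
    Vinv = np.linalg.inv(V)
    return np.linalg.solve(A.T @ Vinv @ A, A.T @ Vinv @ x)   # generalized least squares

mu_hat, sigma_hat = lloyd_blue(rng.normal(5.0, 2.0, n))
```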

7.
Abstract

One of the acknowledged difficulties with pricing immediate annuities is that underwriting the annuitant's life is the exception rather than the rule. In the absence of underwriting, the price paid for a life-contingent annuity is the same for all sales at a given age. This exposes the market (insurance company and potential policyholder alike) to antiselection. The insurance company worries that only the healthiest people choose a life-contingent annuity and therefore adjusts mortality accordingly. The potential policyholders worry that they are not being compensated for their relatively poor health and choose not to purchase what would otherwise be a very beneficial product.

This paper develops a model of underlying, unobserved health. Health is a state variable that follows a first-order Markov process. An individual reaches the “death” state either by accident from any health state or through progressively declining health. Health state is one-dimensional, in the sense that health can either “improve” or “deteriorate” by moving farther from or closer to the “death” state, respectively. The probability of death in a given year is a function of health state, not of age. Therefore, in this model a person is exactly as old as he or she feels.
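A minimal sketch of such an "ageless" health-state model (the transition probabilities are illustrative only, not calibrated to any annuity table): mortality depends only on the current health state, yet the cohort's one-year death probabilities rise with duration as survivors drift toward poorer states.

```python
import numpy as np

# States 0..4 are health (0 = best), state 5 is death (absorbing).
P = np.array([
    [0.90, 0.08, 0.01, 0.00, 0.00, 0.01],
    [0.05, 0.85, 0.07, 0.01, 0.00, 0.02],
    [0.01, 0.06, 0.82, 0.07, 0.01, 0.03],
    [0.00, 0.01, 0.08, 0.78, 0.07, 0.06],
    [0.00, 0.00, 0.01, 0.10, 0.74, 0.15],
    [0.00, 0.00, 0.00, 0.00, 0.00, 1.00],
])

def mortality_curve(initial_health_mix, years=40):
    """One-year death probabilities by duration for a cohort with the given initial health mix."""
    dist = np.append(initial_health_mix, 0.0)       # no one starts in the death state
    qs = []
    for _ in range(years):
        new = dist @ P
        alive_before, alive_after = 1.0 - dist[-1], 1.0 - new[-1]
        qs.append((alive_before - alive_after) / alive_before)
        dist = new
    return np.array(qs)

# Death rates rise with duration even though no transition probability depends on age.
q = mortality_curve(np.array([0.40, 0.30, 0.20, 0.08, 0.02]))
```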

I first demonstrate that a multistate, ageless Markov model can match the mortality patterns in the common annuity mortality tables. The model is extended to consider several types of mortality improvements: permanent through decreasing probability of deteriorating health, temporary through improved distribution of initial health state, and plateau through the effects of past health improvements.

I then construct an economic model of optimal policyholder behavior, assuming that the policyholder either knows his or her health state or has some limited information. The value of mortality risk transfer through purchasing a life-contingent annuity is estimated for each health state under various risk-aversion parameters. Given the economic model for optimal purchasing of annuities, the value of underwriting (limited information about policyholder health state) is demonstrated.

8.
Abstract

This paper enumerates questions about the assumptions, methods, and interpretation of mortality projections that are made for policy applications. These questions are of practical concern to people who make and use projections, and cover a much narrower scope than the review by Tuljapurkar and Boe (pp. 13–47). The objective in circulating this paper was to provide an initial focus for participants at the SOA meeting.

9.
Abstract

The objective of this paper is to investigate dynamic properties of age trajectories of physiological indices and their effects on mortality risk and longevity using longitudinal data on more than 5,000 individuals collected in biennial examinations of the Framingham Heart Study (FHS) original cohort during about 50 subsequent years of follow-up. We first performed empirical analyses of the FHS longitudinal data. We evaluated average age trajectories of indices describing physiological states for different groups of individuals and established their connections with mortality risk. These indices include body mass index, diastolic blood pressure, pulse pressure, pulse rate, level of blood glucose, hematocrit, and serum cholesterol. To investigate the dynamic mechanisms responsible for changes in the aging human organism using the available longitudinal data, we further developed a stochastic process model of human mortality and aging, incorporating the notions of “physiological norms,” “allostatic adaptation and allostatic load,” “stress resistance,” and other characteristics associated with the internal process of aging and the effects of external disturbances. In this model, the persistent deviation of physiological indices from their normal values contributes to an increase in morbidity and mortality risks. We used the stochastic process model in the statistical analyses of longitudinal FHS data. We found that different indices have different average age patterns and different dynamic properties. We also found that age trajectories of long-lived individuals differ from those of the shorter-lived members of the FHS original cohort for both sexes. Using methods of statistical modeling, we evaluated “normal” age trajectories of physiological indices and the dynamic effects of allostatic adaptation. The model allows for evaluating average patterns of aging-related decline in stress resistance. This effect is captured by the narrowing of the U-shaped mortality risk (considered as a function of physiological state) with age. We showed that individual indices and their rates of change with age, as well as other measures of individual variability manifested during the life course, are important contributors to mortality risks. The advantages and limitations of the approach are discussed.
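A stylized, one-dimensional sketch in the spirit of the quadratic-hazard models described above (all functional forms and parameters are assumptions): a physiological index is pulled toward an age-dependent "norm" (allostatic adaptation) while being perturbed by random disturbances, and the mortality hazard is U-shaped in the deviation from the norm, with the U narrowing as age increases.

```python
import numpy as np

rng = np.random.default_rng(4)
ages, dt, n = np.arange(40, 100), 1.0, 10_000

f = lambda t: 80.0 + 0.2 * (t - 40)              # "normal" trajectory of the index (assumed)
Q = lambda t: 1e-5 * (1.0 + 0.03 * (t - 40))     # U-shape curvature, increasing with age (assumed)
mu0 = lambda t: 1e-4 * np.exp(0.08 * (t - 40))   # baseline hazard (assumed)

X = rng.normal(f(40), 10.0, n)                   # individual physiological index at age 40
alive = np.ones(n, dtype=bool)
deaths_by_age = []
for t in ages:
    hazard = mu0(t) + Q(t) * (X - f(t)) ** 2     # quadratic (U-shaped) mortality risk
    dies = alive & (rng.random(n) < 1.0 - np.exp(-hazard * dt))
    deaths_by_age.append(int(dies.sum()))
    alive &= ~dies
    # Allostatic adaptation: partial pull toward the norm plus random disturbances.
    X = X + 0.3 * (f(t) - X) * dt + rng.normal(0.0, 3.0, n)
```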

10.
Summary

The King-Hardy method and the Hardy method mentioned in, e.g., Miller (1946, 6.4) are old, well-known moment methods in Gompertz-Makeham graduation of mortality. They have poor efficiencies (Forsén, 1977, App. F.6 and F.5), measured in terms of asymptotic variance, relative to the modified minimum chi-square method, which is optimal (Hoem, 1972, 6.2, 7.2). This paper shows that a modification of the moment methods mentioned gives a method that is “almost” as efficient as the best method available and is easier to use in practice.
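For reference, the Gompertz-Makeham law underlying the graduation is

```latex
\mu_x = A + B\,c^{x}
```

where A captures age-independent (accidental) mortality and the term B c^x captures the exponentially increasing senescent component; the moment methods and the minimum chi-square method differ only in how A, B, and c are estimated from observed deaths and exposures.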

11.
Abstract

This paper introduces nonlinear threshold time series modeling techniques that actuaries can use in pricing insurance products, analyzing the results of experience studies, and forecasting actuarial assumptions. Basic “self-exciting” threshold autoregressive (SETAR) models, as well as heteroscedastic and multivariate SETAR processes, are discussed. Modeling techniques for each class of models are illustrated through actuarial examples. The methods that are described in this paper have the advantage of being direct and transparent. The sequential and iterative steps of tentative specification, estimation, and diagnostic checking parallel those of the orthodox Box-Jenkins approach for univariate time series analysis.
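A minimal sketch of a two-regime SETAR model (the delay, threshold, and coefficients are assumptions chosen for illustration): the process is simulated and then re-estimated by conditional least squares, fitting a separate AR(1) within each regime.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, r = 500, 1, 0.0                       # delay d and threshold r assumed known here

# Simulate a two-regime SETAR(2; 1, 1) process:
#   y_t = 0.5 * y_{t-1} + e_t    if y_{t-d} <= r
#   y_t = -0.4 * y_{t-1} + e_t   if y_{t-d} > r
y = np.zeros(n)
for t in range(1, n):
    phi = 0.5 if y[t - d] <= r else -0.4
    y[t] = phi * y[t - 1] + rng.normal(scale=0.5)

# Conditional least squares: fit a separate AR(1) within each regime (d = 1).
lagged, current = y[:-1], y[1:]
low = lagged <= r
phi_low = np.sum(current[low] * lagged[low]) / np.sum(lagged[low] ** 2)
phi_high = np.sum(current[~low] * lagged[~low]) / np.sum(lagged[~low] ** 2)
```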

12.
Abstract

This paper analyzes in some detail potential impacts on economic security programs—government, employer, and individual—that the aging of the baby boom generation may create. It begins by defining what is meant by “population aging” and concludes that fertility shifts are more important than improving life expectancy. It also argues that calling the baby boom the “postwar baby boom” is inaccurate and will lead to missed targets for product development and marketing. Finally, this section of the paper notes that the most rapidly growing segment of the population will be the oldest old—those age 85 and over, who will also put the greatest stress on the provision of health care and retirement income security.

The paper then looks at other demographic shifts of importance, in particular female labor force participation rates. The impact of shifting demographics is reviewed for each sponsor of economic security programs: the government (health care and social security); the employer (pension plans and group benefits); and the individual. Points of concern and offsetting opportunities for the insurance industry are noted. Finally, the paper looks at whether we will be able to “afford” the sudden retirement of the baby boom. The conclusion is that this will be affordable if we can convince a portion of the labor force to stay active longer, and if we have healthy productivity growth rates. The problems of an aging population can all be viewed as opportunities for those who have the map.

13.
Abstract

This article investigates performance of interval estimators of various actuarial risk measures. We consider the following risk measures: proportional hazards transform (PHT), Wang transform (WT), value-at-risk (VaR), and conditional tail expectation (CTE). Confidence intervals for these measures are constructed by applying nonparametric approaches (empirical and bootstrap), the strict parametric approach (based on the maximum likelihood estimators), and robust parametric procedures (based on trimmed means).

Using Monte Carlo simulations, we compare the average lengths and proportions of coverage (of the true measure) of the intervals under two data-generating scenarios: “clean” data and “contaminated” data. In the “clean” case, data sets are generated by the following (similar shape) parametric families: exponential, Pareto, and lognormal. Parameters of these distributions are selected so that all three families are equally risky with respect to a fixed risk measure. In the “contaminated” case, the “clean” data sets from these distributions are mixed with a small fraction of unusual observations (outliers). It is found that approximate knowledge of the underlying distribution combined with a sufficiently robust estimator (designed for that distribution) yields intervals with satisfactory performance under both scenarios.
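A minimal sketch of the nonparametric route for two of the measures (VaR and CTE), under an assumed "clean" exponential loss sample: the empirical estimates are paired with bootstrap percentile intervals of the kind compared in the study.

```python
import numpy as np

rng = np.random.default_rng(3)
losses = rng.exponential(scale=10.0, size=500)   # a "clean" exponential loss sample (assumed)
p = 0.95

def var_cte(x, p):
    v = np.quantile(x, p)                        # empirical value-at-risk at level p
    return v, x[x >= v].mean()                   # CTE: average loss beyond VaR

# Nonparametric bootstrap percentile intervals for both measures.
boot = np.array([var_cte(rng.choice(losses, size=losses.size, replace=True), p)
                 for _ in range(2000)])
ci_var = np.percentile(boot[:, 0], [2.5, 97.5])
ci_cte = np.percentile(boot[:, 1], [2.5, 97.5])
```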

14.
Well‐functioning financial markets are key to efficient resource allocation in a capitalist economy. While many managers express reservations about the accuracy of stock prices, most academics and practitioners agree that markets are efficient by some reasonable operational criterion. But if standard capital markets theory provides reasonably good predictions under “normal” circumstances, researchers have also discovered a number of “anomalies”—cases where the empirical data appear sharply at odds with the theory. Most notable are the occasional bursts of extreme stock price volatility (including the recent boom‐and‐bust cycle in the NASDAQ) and the limited success of the Capital Asset Pricing Model in accounting for the actual risk‐return behavior of stocks. This article addresses the question of how the market's efficiency arises. The central message is that managers can better understand markets as a complex adaptive system. Such systems start with a “heterogeneous” group of investors, whose interaction leads to “self‐organization” into groups with different investment styles. In contrast to market efficiency, where “marginal” investors are all assumed to be rational and well‐informed, the interaction of investors with different “decision rules” in a complex adaptive system creates a market that has properties and characteristics distinct from the individuals it comprises. For example, simulations of the behavior of complex adaptive systems suggest that, in most cases, the collective market will prove to be smarter than the average investor. But, on occasion, herding behavior by investors leads to “imbalances”—and, hence, to events like the crash of '87 and the recent plunge in the NASDAQ. In addition to its grounding in more realistic assumptions about the behavior of individual investors, the new model of complex adaptive systems offers predictions that are in some respects more consistent with empirical findings. Most important, the new model accommodates larger‐than‐normal stock price volatility (in statistician's terms, “fat‐tailed” distributions of prices) far more readily than standard efficient market theory. And to the extent that it does a better job of explaining volatility, this new model of investor behavior is likely to have implications for two key areas of corporate financial practice: risk management and investor relations. But even so, the new model leaves one of the main premises of modern finance theory largely intact–that the most reliable basis for valuing a company's stock is its discounted cash flow.

15.
The former dean of the University of Virginia's Darden School explores how business schools must adapt to prepare future business leaders to assume the leadership responsibilities necessary to respond effectively to financial crises. The article begins with a statement by Milton Friedman and Anna Schwartz in their Monetary History of the United States about the failure of U.S. policy makers to prevent the collapse of the U.S. banking system during the Great Depression. Then turning to the crisis of 2008, the author draws on recent accounts of the leadership—both effective and ineffective—provided by policymakers to support Friedman and Schwartz's contention that the success of countries in responding to crises “depends on the presence of one or more outstanding individuals willing to assume responsibility and leadership.” After citing Nassim Taleb's characterization of the financial system as inherently “fragile,” the article offers a number of insights about the kind of leadership that is likely to prove effective in protecting such systems. Using the responses of policymakers like Bernanke, Paulson, and Geithner as examples, the author observes that successful leaders rank priorities and set direction, mobilize collective action, choose whether and how to use the “panoply of tools” at their disposal, and attempt to respond in a comprehensive, coordinated way to all aspects of a crisis using a flexible set of approaches and methods that he identifies as “Ad Hoc‐racy.” With such insights in mind, the author goes on to suggest that changes in current research and teaching about leadership are likely to take the form of the following six “stretches”:

16.
All social practices reproduce certain taken-for-granteds about what exists. Constructions of existence (ontology) go together with notions of what can be known of these things (epistemology), and how such knowledge might be produced (methodology)—along with questions of value or ethics. Increasingly, reflective practitioners—whatever their practice—are exploring the assumptions they ‘put to work’ and the conventions they reproduce. Questions are being asked about how to ‘cope’ with change in a postmodern world, and ethical issues are gaining more widespread attention. If we look at these constructions, we often find that social practices: (a) give central significance to the presumption of a single real world; (b) centre a knowing subject who should strive to be separate from knowable objects, i.e. people and things that make up the world; and (c) presume a knowing subject who can produce knowledge (about the real world) that is probably true and a matter of fact rather than value (including ethics). Social practices of this sort often produce a right–wrong debate in which one individual or group imposes their ‘facts’ (and values) on others. Further, they often do so using claims to greater or better knowledge (e.g. science, facts …) as their justifications.

We use the term “relational constructionism” as a summary reference to certain assumptions and arguments that define our “thought style”. They are as follows: fact and value are joined (rather than separate); the knower and the known—self and other—are co-constructed; knowledge is always a social affair—a local–historical–cultural (social) co-construction made in conversation, in other kinds of action, and in the artefacts of human activities (‘frozen’ actions, so to speak); and multiple inter-actions simultaneously (re)produce multiple local cultures and relations. This said, relations may impose one local reality (be mono-logical) or give space to multiplicity (be multi-logical). In this view, the received view of science is but one (socially constructed) way of world making, as is social constructionism, and different ways have different—and very real—consequences.

In this paper, we take our relational constructionist style of thinking to examine differing constructions of foot and mouth disease (FMD) in the UK. We do so in order to highlight the dominant relationship construction. We argue that this could be metaphorised as ‘accounting in Babel’—as multiple competing monologues—many of which remained very local and subordinated by a dominant logic. However, from a relational constructionist point of view, it is also possible to argue that social accounting can be done in a more multi-logical way that gives space to dialogue and multiplicity. In the present (relational constructionist) view, accounting is no longer ‘just’ a question of knowledge and methodology but also a question of value and power. To render accounting practices more ethical, they must be more multi-voiced and enable ‘power to’ rather than ‘power over’.

17.
Abstract

At, or about, the age of retirement, most individuals must decide what additional fraction of their marketable wealth, if any, should be annuitized. Annuitization means purchasing a nonrefundable life annuity from an insurance company, which then guarantees a lifelong consumption stream that cannot be outlived. The decision of whether or not to annuitize additional liquid assets is a difficult one, since it is clearly irreversible and can prove costly in hindsight. Obviously, for a large group of people, the bulk of financial wealth is forcibly annuitized, for example through company pensions and social security. For others, especially as it pertains to personal pension plans, such as 401(k), 403(b), and IRA plans as well as variable annuity contracts, there is much discretion in the matter.

The purpose of this paper is to focus on the question of when and if to annuitize. Specifically, my objective is to provide practical advice aimed at individual retirees and their advisors. My main conclusions are as follows:

• Annuitization of assets provides unique and valuable longevity insurance and should be actively encouraged at higher ages. Standard microeconomic utility-based arguments indicate that consumers would be willing to pay a substantial “loading” in order to gain access to a life annuity (a simple annuity-valuation sketch follows this list).

• The large adverse selection costs associated with life annuities, which range from 10% to 20%, might serve as a strong deterrent to full annuitization.

• Retirees with a (strong) bequest motive might be inclined to self-annuitize during the early stages of retirement. Indeed, it appears that most individuals—faced with expensive annuity products—can effectively “beat” the rate of return from a fixed immediate annuity until age 75–80. I call this strategy consume term and invest the difference.

• Variable immediate annuities (VIAs) combine equity market participation with longevity insurance. This financial product is currently underutilized (and not available in certain jurisdictions) and can only grow in popularity.
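A minimal money's-worth sketch, referenced in the first bullet above (the mortality rates, interest rate, and quoted premium are all assumed for illustration): the actuarial present value of a life annuity of 1 per year is computed from one-year death rates, and the quoted premium's "loading" is its excess over that value.

```python
import numpy as np

def annuity_apv(qx, i):
    """APV of 1 per year, paid at the end of each year while alive, from one-year death rates."""
    px = np.cumprod(1.0 - np.asarray(qx))        # survival probabilities to the end of each year
    v = (1.0 + i) ** -np.arange(1, len(px) + 1)
    return float(np.sum(px * v))

ages = np.arange(65, 111)
qx = np.clip(0.0006 * np.exp(0.095 * (ages - 30)), 0.0, 1.0)   # illustrative Gompertz-style rates
apv = annuity_apv(qx, 0.04)                      # value per dollar of annual income, at 4%

quoted_premium = 1.10 * apv                      # a hypothetical market quote
loading = quoted_premium / apv - 1.0             # 10% by construction in this example
```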

18.
In this panel, which also took place at the recent SASB Symposium, senior representatives of four leading institutional investors—BlackRock, CalPERS, CalSTRS, and Wells Fargo—emphasize the relevance of ESG data for “mainstream” investors and the importance of integrating it with traditional fundamental analysis rather than viewing it as a separate set of reporting responsibilities. Moreover, the logical place for integrating ESG information is in the most forward-looking section of financial reports, the “Management Discussion and Analysis,” or “MD&A,” which would be strengthened by including more and better information about the companies' ESG risks and initiatives. Some panelists noted that ESG information is likely to be valued by investors because of its ability to shed light on “idiosyncratic” risks that are not captured by the traditional risk factors that have long dominated asset pricing models. Others described ESG information as helpful in evaluating and comparing the “quality” of management in portfolio companies. But all agreed that efforts like the SASB's to standardize ESG data are essential to successful integration of that data into the decision‐making process of large mainstream investors. And as the panelists also made clear, there is an important generational component to the growing movement to integrate ESG into mainstream investing, with Millennials—and particularly Millennial women—showing especially strong support.

19.
1. Introduction.

The two cases where a normal distribution is “truncated” at a known point have been treated by R. A. Fisher (1) and W. L. Stevens (2), respectively. Fisher treated the case in which all record is omitted of observations below a given value, while Stevens treated the case in which the frequency of observations below a given value is recorded but the individual values of these observations are not specified. In both cases the distribution is usually termed truncated. In the first case, admittedly, the observations form a random sample drawn from an incomplete normal distribution, but in the second case we sample from a complete normal distribution in which the obtainable information in a sense has been censored, either by nature or by ourselves. To distinguish between the two cases the distributions will be called truncated and censored, respectively. (The term “censored” was suggested to me by Mr J. E. Kerrich.) The term “point of truncation” will be used for both.
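A minimal sketch contrasting the two likelihoods (the censoring point, the sample, and the starting values are assumptions of this sketch): in the truncated case only values above the point exist and the normal density is renormalized, while in the censored case values below the point contribute only through their count.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def negloglik_truncated(theta, x, c):
    """Fisher's case: the sample is drawn only from the part of the normal above c."""
    mu, sigma = theta
    if sigma <= 0:
        return np.inf
    return -np.sum(norm.logpdf(x, mu, sigma) - norm.logsf(c, mu, sigma))

def negloglik_censored(theta, x, c, n_below):
    """Stevens's case: values above c are observed; below c only the count is known."""
    mu, sigma = theta
    if sigma <= 0:
        return np.inf
    return -(np.sum(norm.logpdf(x, mu, sigma)) + n_below * norm.logcdf(c, mu, sigma))

rng = np.random.default_rng(5)
full = rng.normal(10.0, 2.0, 500)              # hypothetical complete sample
c = 9.0                                        # point of truncation / censoring
observed, n_below = full[full > c], int(np.sum(full <= c))

fit = minimize(negloglik_censored, x0=[observed.mean(), observed.std()],
               args=(observed, c, n_below), method="Nelder-Mead")
mu_hat, sigma_hat = fit.x
```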

20.
Examined in this paper is the choice between private and public incorporation of an asset for an entrepreneur (asset owner) who hires a manager with superior information about the asset's return distribution. Public sale of equity is shown to be the preferred alternative when (a) capital market issue costs are low or (b) the asset's idiosyncratic risk is high and the owner is either sufficiently risk averse or sufficiently “optimistic” about the asset's expected return. Thus, those assets deemed most valuable by their owners will tend to be publicly incorporated. The paper also explores the impact of incorporation mode—private versus public—and information structure on the firm's investment policy and ownership distribution.
