20 similar documents were retrieved.
1.
C. Allen Pinkham ASA MAAA MA, Marianne E. Cumming MSc MD, Howard Minuk MD FRCPC 《North American actuarial journal : NAAJ》2013,17(3):7-16
Abstract Metabolic syndrome and its association with mortality have not been studied in insured lives populations. The Swiss Re Study evaluated metabolic syndrome prevalence and associated mortality from all causes and circulatory disease in a cohort of 35,470 predominantly healthy individuals, aged 18–83 years, who were issued life insurance policies between 1986 and 1997. Metabolic syndrome was defined using the National Cholesterol Education Program (NCEP) Expert Panel Adult Treatment Panel (ATP) III guidelines. The NCEP obesity criteria were modified with a prediction equation using body mass index, gender, and age substituted for waist circumference. Adjustments also were made for nonfasting triglyceride and blood glucose values. Risk ratios for policyholders identified with metabolic syndrome were 1.16 (P = .156) for mortality from all causes and 1.45 (P = .080) for mortality from circulatory disease compared with individuals without the syndrome. Risk was proportional to the number of components, or score, of the metabolic syndrome present. Risk ratios for metabolic syndrome score were 1.14 (P < .001) for mortality from all causes and 1.38 (P < .001) for mortality from circulatory disease compared with individuals without metabolic syndrome factors. In both all-cause and circulatory death models, relative risk was highest for the blood pressure risk factor. Based on a modified NCEP definition, increased mortality risk is associated with metabolic syndrome in an insured lives cohort and has life insurance mortality pricing implications.
2.
3.
4.
5.
JOHN J. MAHER, TARUN K. SEN 《International Journal of Intelligent Systems in Accounting, Finance & Management》1997,6(1):59-72
Bond rating agencies examine the financial outlook of a company and the characteristics of a bond issue and assign a rating that indicates an independent assessment of the degree of default risk associated with the firm’s bonds. Predicting this bond rating has been of interest to potential investors as well as to the firm. Prior research in this area has primarily relied upon traditional statistical methods to develop models with reasonably good prediction accuracy. This article utilizes a neural network approach to modeling the bond rating process in an attempt to increase the overall prediction accuracy of the models. A comparison is made to a more traditional logistic regression approach to classification prediction. The results indicate that the neural network-based model performs significantly better than the logistic regression model for classifying a holdout sample of newly issued bonds in the 1990–92 period. A potential drawback to a neural network approach is a tendency to overfit the data, which could negatively affect the model’s generalizability. This study carefully controls for overfitting and obtains significant improvement in bond rating prediction compared to the logistic regression approach. © 1997 by John Wiley & Sons, Ltd.
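For context, below is a minimal sketch of the comparison the article describes: a logistic regression against a small feed-forward neural network on a holdout sample. The financial-ratio features, rating classes, and data are all simulated; this is not the authors' implementation.

```python
# Hypothetical sketch: logistic regression vs. a small neural network
# for bond-rating classification. Synthetic data stands in for the
# financial ratios and ratings used in the study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
X = rng.normal(size=(n, 6))                      # six financial-ratio features
score = X @ np.array([1.2, -0.8, 0.5, 0.3, -0.4, 0.9])
y = np.digitize(score, bins=[-1.5, 0.0, 1.5])    # four ordered rating classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# Small hidden layer and early stopping to guard against overfitting,
# mirroring the paper's concern about generalizability.
nnet = MLPClassifier(hidden_layer_sizes=(8,), early_stopping=True,
                     max_iter=2000, random_state=0).fit(X_tr, y_tr)

print("logistic holdout accuracy:", logit.score(X_te, y_te))
print("neural net holdout accuracy:", nnet.score(X_te, y_te))
```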
6.
T. E. Cooke 《Accounting & Business Research》2013,43(3):209-224
A problem that sometimes occurs in undertaking empirical research in accounting and finance is that the theoretically correct form of the relation between the dependent and independent variables is not known, although it is often thought or assumed to be monotonic. In addition, transformations of disclosure measures and independent variables are proxies for underlying constructs; hence, while theory may specify a functional form for the underlying theoretical construct, it is unlikely to hold for empirical proxies. In order to cope with this problem, a number of accounting disclosure studies have transformed variables so that the statistical analysis is more meaningful. One approach that has been advocated in such circumstances is to rank the data and then apply regression techniques, a method that has been used recently in a number of accounting disclosure studies. This paper reviews a number of transformations, including the Rank Regression procedure. Because of the inherent properties of ranks and their use in regression analysis, an extension is proposed that provides an alternative mapping that replaces the data with their normal scores. The normal scores approach retains the advantages of using ranks but has other beneficial characteristics, particularly in hypothesis testing. Regressions based on untransformed data, on the log odds ratio of the dependent variable, on ranks, and on normal scores are applied to data on the disclosure of information in the annual reports of companies in Japan and Saudi Arabia. It is found that regression using normal scores has some advantages over ranks that, in part, depend on the structure of the data. However, the case studies demonstrate that no one procedure is best, but that multiple approaches are helpful to ensure the results are robust across methods.
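To illustrate the transformations discussed in this abstract, here is a hedged sketch comparing a regression on ranks with a regression on normal scores (using Blom's approximation); the disclosure data and variable names are simulated, not the Japanese or Saudi data.

```python
# Hypothetical sketch: regression on ranks vs. on normal scores.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100
x = rng.gamma(2.0, 1.0, size=n)                        # skewed explanatory proxy
disclosure = 0.4 * x + rng.normal(scale=0.5, size=n)   # simulated disclosure index

def normal_scores(v):
    """Map data to normal scores via Blom's approximation."""
    ranks = stats.rankdata(v)
    return stats.norm.ppf((ranks - 0.375) / (len(v) + 0.25))

# Rank regression
fit_rank = sm.OLS(stats.rankdata(disclosure),
                  sm.add_constant(stats.rankdata(x))).fit()
# Normal-scores regression
fit_ns = sm.OLS(normal_scores(disclosure),
                sm.add_constant(normal_scores(x))).fit()

print("rank regression R^2:", fit_rank.rsquared)
print("normal scores R^2:  ", fit_ns.rsquared)
```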
7.
This study approaches corporate financial distress early warning from a cash-flow-risk perspective. First, a Cash Flow at Risk (CFaR) model is constructed from factors including the firm's internal environment, macroeconomic policy, monetary policy, and fiscal policy, from which expected cash flow and cash flow at risk are identified. These two indicators are then used as warning variables in a binary logistic early-warning model of financial distress. Finally, 27 ST companies in the Chinese securities market, representing firms in financial distress, and matched non-ST companies in sound financial condition are selected as samples for an empirical study. The results show that the CFaR model measures the cash flow condition of both groups of listed companies well, and that the expected cash flow and cash flow at risk of the two groups differ significantly; the binary logistic model provides effective early warning of listed companies' financial risk, with warning accuracy rates of 85.2% and 81.5% for the two groups, respectively.
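A rough sketch of the two-step design described above (an empirical cash-flow-at-risk quantile plus expected cash flow feeding a binary logistic warning model) follows; the firms, cash flows, and distress labels are simulated, and the CFaR specification here is a simplification of the one in the paper.

```python
# Hypothetical sketch of the two-step CFaR + logistic early-warning design.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_firms, n_quarters = 54, 20
# Simulated quarterly operating cash flows for each firm
cf = rng.normal(loc=rng.normal(1.0, 0.6, size=(n_firms, 1)),
                scale=0.8, size=(n_firms, n_quarters))

expected_cf = cf.mean(axis=1)                 # "expected cash flow"
cfar = np.percentile(cf, 5, axis=1)           # 5% cash-flow-at-risk quantile

X = np.column_stack([expected_cf, cfar])
# Simulated distress label: weaker cash-flow profiles are more likely "ST-like"
latent = -1.5 * expected_cf - 0.8 * cfar + rng.normal(scale=0.5, size=n_firms)
y = (latent > np.median(latent)).astype(int)  # 1 = distressed firm

model = LogisticRegression().fit(X, y)
print("in-sample warning accuracy:", model.score(X, y))
```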
8.
A binary logistic credit risk assessment model is constructed and applied to sample data from a branch of China Everbright Bank to evaluate the credit risk of internet-finance personal microloans at commercial banks. The results show that customer gender, education, age, income, occupation, and location have significant effects on personal microloan credit risk. Age, income, and education are positively related to customer credit rating; women's credit risk is significantly lower than men's; customers who hold credit cards and have lower loan-to-deposit ratios receive higher credit ratings; and the repayment rate of customers in first- and second-tier cities is generally higher than that of customers in county-level and prefecture-level cities. Accordingly, commercial banks should effectively avoid and diversify the credit risk of internet-finance personal microloans.
9.
10.
Strengthening the administration of individual income tax for high-income earners and promoting tax compliance has long been an important task of the tax authorities. Using a logistic regression model with an ordered multi-category dependent variable, this study takes as its sample the 2010 individual income tax returns filed by taxpayers with annual income above 120,000 yuan in a city in Jiangsu Province. The dependent variable is a five-level ordered variable for the grade of "tax balance due", and the independent variables are six factors from the tax return: the taxpayer's age, taxable income, tax payable, gender, broad occupation category, and broad industry category. The ordered logistic regression results offer new ideas for the tax system to strengthen the filing and administration of individual income tax for high-income earners with annual income above 120,000 yuan and to promote tax compliance.
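As a sketch of the estimation technique named above, the following code fits an ordered logistic regression to simulated taxpayer attributes with statsmodels' OrderedModel (available in recent statsmodels releases); the variables and coefficients are invented and do not reflect the study's data.

```python
# Hypothetical sketch: ordered logistic regression with a five-level
# ordinal dependent variable, as in the tax-compliance study described above.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "age": rng.integers(25, 70, size=n),
    "income_10k": rng.lognormal(mean=2.6, sigma=0.4, size=n),  # income, 10k yuan
    "tax_10k": rng.lognormal(mean=1.0, sigma=0.5, size=n),     # tax payable, 10k yuan
    "male": rng.integers(0, 2, size=n),
})
# Latent score drives a five-category ordered "balance due" grade
latent = (0.15 * df["income_10k"] + 0.02 * df["age"]
          + 0.3 * df["male"] + rng.normal(scale=1.0, size=n))
grade = pd.Series(pd.cut(latent, bins=5, labels=False)).astype("category")
grade = grade.cat.as_ordered()

model = OrderedModel(grade, df[["age", "income_10k", "tax_10k", "male"]],
                     distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```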
11.
Longevity risk is among the most important factors to consider for the pricing and risk management of longevity products. Past improvements in mortality over many years, and the uncertainty of these improvements, have attracted the attention of experts, both practitioners and academics. Since aggregate mortality rates reflect underlying trends in causes of death, insurers and demographers are increasingly considering cause-of-death data to better understand the risks in their mortality assumptions. The relative importance of causes of death has changed over many years: as one cause declines, others rise or fall. The dependence between mortality from different causes of death is important when projecting future mortality. However, for scenario analysis based on causes of death, the assumption usually made is that causes of death are independent. Recent models, in the form of Vector Error Correction Models (VECMs), have been developed for multivariate dynamic systems and capture time dependency with common stochastic trends. These models include long-run stationary relations between the variables and thus allow a better understanding of the nature of this dependence. This article applies VECMs to cause-of-death mortality rates to assess the dependence between these competing risks. We analyze the five main causes of death in Switzerland. Our analysis confirms the existence of a long-run stationary relationship between these five causes. This estimated relationship is then used to forecast mortality rates, which are shown to be an improvement over forecasts from more traditional ARIMA processes, which do not allow for cause-of-death dependencies.
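For illustration, a compact sketch of fitting a VECM to cause-specific mortality series and forecasting from it is given below, using statsmodels; the five simulated series, the deterministic term, and the cointegration rank selection are illustrative assumptions, not the Swiss data or the article's specification.

```python
# Hypothetical sketch: VECM on five cause-of-death log mortality rates.
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

rng = np.random.default_rng(4)
T, k = 60, 5
# Simulated log mortality rates sharing a common stochastic downward trend
common_trend = np.cumsum(rng.normal(-0.02, 0.01, size=T))
rates = (common_trend[:, None] * np.array([1.0, 0.8, 1.2, 0.6, 0.9])
         + rng.normal(scale=0.02, size=(T, k)))
data = pd.DataFrame(rates, columns=["circulatory", "cancer", "respiratory",
                                    "external", "other"])

# Choose the cointegration rank with Johansen's trace test
rank = select_coint_rank(data, det_order=0, k_ar_diff=1).rank
model = VECM(data, k_ar_diff=1, coint_rank=max(rank, 1), deterministic="co")
res = model.fit()
print(res.predict(steps=10))          # 10-step-ahead forecasts for all causes
```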
12.
13.
14.
Mortality improvements pose a challenge for the life annuity business. For the management of such portfolios, it is important to forecast future mortality rates. Standard models for mortality forecasting assume that the force of mortality at age x in calendar year t is of the form exp(α_x + β_x κ_t), where the dynamics of the time index κ_t are described by a random walk with drift. Starting from such a best estimate of future mortality (called the second-order mortality basis in actuarial science), the paper explains how to determine a conservative life table serving as the first-order mortality basis. The idea is to replace the stochastic projected life table with a deterministic conservative one, and to assume mutual independence of the remaining lifetimes. The paper then studies the distribution of the present value of the payments made to a closed group of annuitants. It turns out that the De Pril–Panjer algorithm can be used for that purpose under the first-order mortality basis. The connection with ruin probabilities is briefly discussed. An inequality between the distributions of the present value of future annuity payments under the first-order and second-order mortality bases is provided, which allows value-at-risk computed under these two sets of assumptions to be linked. A numerical example based on Belgian mortality statistics illustrates how the approach proposed in this paper can be implemented in practice.
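As a worked illustration of the second-order (best-estimate) projection step, the sketch below extrapolates the time index κ_t with a random walk with drift and turns it into projected forces of mortality exp(α_x + β_x κ_t); the parameter values are made up, and the conservative first-order adjustment from the paper is not reproduced.

```python
# Hypothetical sketch: project the time index kappa_t with a random walk
# with drift and build projected forces of mortality
# mu_x(t) = exp(alpha_x + beta_x * kappa_t).
import numpy as np

rng = np.random.default_rng(5)
ages = np.arange(60, 100)
alpha = -9.0 + 0.09 * (ages - 60)         # made-up age profile
beta = np.full_like(alpha, 1.0 / len(ages))

kappa_hist = np.cumsum(rng.normal(-1.0, 0.8, size=30))   # fitted index, made up
drift = np.diff(kappa_hist).mean()
sigma = np.diff(kappa_hist).std(ddof=1)

horizon = 20
steps = rng.normal(drift, sigma, size=horizon)
kappa_proj = kappa_hist[-1] + np.cumsum(steps)           # one stochastic path

mu_proj = np.exp(alpha[:, None] + beta[:, None] * kappa_proj[None, :])
print(mu_proj.shape)    # (number of ages, projection years)
```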
15.
Abstract This case study illustrates the analysis of two possible regression models for bivariate claims data. Estimates or forecasts of loss distributions under these two models are developed using two methods of analysis: (1) maximum likelihood estimation and (2) the Bayesian method. These methods are applied to two data sets consisting of 24 and 1,500 paired observations, respectively. The Bayesian analyses are implemented using Markov chain Monte Carlo via WinBUGS, as discussed in Scollnik (2001). A comparison of the analyses reveals that forecasted total losses can be dramatically underestimated by the maximum likelihood estimation method because it ignores the inherent parameter uncertainty.
16.
Recently a large number of new mortality models have been proposed to analyze historic mortality rates and project them into the future. Many of these suffer from being over-parameterized or from having terms added in an ad hoc manner that cannot be justified in terms of demographic significance. In addition, poor specification of a model can lead to period effects in the data being wrongly attributed to cohort effects, which results in the model making implausible projections. We present a general procedure for constructing mortality models using a combination of a toolkit of functions and expert judgment. By following the general procedure, it is possible to identify sequentially every significant demographic feature in the data and give it a parametric structural form. We demonstrate using U.K. mortality data that the general procedure produces a relatively parsimonious model that nevertheless has a good fit to the data.
17.
Mortality forecasting models for the oldest-old population underpin population projection, pension cost and liability valuation, and the measurement and management of longevity risk. In mainland China, old-age mortality data are sparse and volatile, so choosing a mortality forecasting model suited to the characteristics of these data is an important research question. Building on a review of progress in mortality forecasting models, this paper first uses the more abundant old-age mortality data from Taiwan to compare eight mortality models, including the Lee-Carter, CBD, and Bayesian hierarchical models, in terms of goodness of fit, forecasting performance, and robustness. On this basis, the CBD model and the Bayesian hierarchical model are fitted to corrected and smoothed mainland Chinese mortality data for modeling and forecasting. The results show that the Bayesian hierarchical model captures the historical volatility of mainland old-age mortality and its prediction intervals cover all realized mortality rates, but the intervals are too wide and the survival curves do not converge; by comparison, the CBD model fits and forecasts mainland old-age mortality better, with reasonable prediction intervals and survival curves. For longevity risk measurement, the CBD model is recommended.
18.
Normalized exponential tilting is an extension of classical theories, including the Capital Asset Pricing Model (CAPM) and the Black-Merton-Scholes model, to price risks with general-shaped distributions. The need for changing multivariate probability measures arises in pricing contingent claims on multiple underlying assets or liabilities. In this article, we apply it to the valuation of mortality-based securities written on mortality indices of several countries. We show how to use multivariate exponential tilting to price the first pure mortality security, the Swiss Re bond. The same technique can be applied to other mortality securitization pricing.
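The normalization idea is easy to state for a single discrete risk: the tilted probabilities are q_i = p_i exp(λ x_i) / Σ_j p_j exp(λ x_j). A small sketch follows; the loss distribution and tilting parameter are arbitrary, and the article's multivariate, mortality-index version is not reproduced here.

```python
# Hypothetical sketch: normalized exponential tilting of a discrete loss
# distribution, q_i = p_i * exp(lambda * x_i) / sum_j p_j * exp(lambda * x_j).
import numpy as np

x = np.array([0.0, 1.0, 2.0, 5.0, 10.0])       # loss outcomes (arbitrary units)
p = np.array([0.50, 0.25, 0.15, 0.07, 0.03])   # physical probabilities
lam = 0.3                                       # tilting (risk-loading) parameter

w = p * np.exp(lam * x)
q = w / w.sum()                                 # tilted (pricing) probabilities

print("physical mean loss:", np.dot(p, x))
print("tilted mean loss:  ", np.dot(q, x))      # loaded expectation used for pricing
```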
19.
Although the Lee-Carter model has become a benchmark in modeling mortality rates, forecasting mortality risk, and hedging longevity risk, some serious issues exist with its inference and interpretation in the actuarial science literature. After pointing out these pitfalls, this article proposes a modified Lee-Carter model, provides a rigorous statistical inference, and derives the asymptotic distributions of the proposed estimators and the unit root test when the mortality index is nearly integrated and the errors in the model satisfy some mixing conditions. When the unit root hypothesis is not rejected, future mortality forecasts can be obtained via the proposed inference. An application of the proposed unit root test to U.S. mortality rates rejects the unit root hypothesis for the female and combined mortality rates but does not reject it for the male mortality rates.
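To show the unit-root step in isolation, here is a sketch applying the standard augmented Dickey-Fuller test from statsmodels to a simulated mortality index; the paper's own modified test statistic and U.S. data are not reproduced.

```python
# Hypothetical sketch: standard ADF unit-root test applied to a simulated
# mortality time index; the paper derives its own test for the modified
# Lee-Carter model, which is not reproduced here.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(6)
kappa = np.cumsum(rng.normal(-0.8, 1.0, size=80))    # random walk with drift

stat, pvalue, *_ = adfuller(kappa, regression="ct")  # allow drift and trend
print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")
if pvalue > 0.05:
    print("unit root not rejected: difference the index before forecasting")
```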
20.
Kevin Dowd PhD, Andrew J. G. Cairns PhD, David Blake PhD, Guy D. Coughlan PhD, Marwa Khalaf-Allah PhD 《North American actuarial journal : NAAJ》2013,17(2):334-356
Abstract The mortality rate dynamics between two related but different-sized populations are modeled consistently using a new stochastic mortality model that we call the gravity model. The larger population is modeled independently, and the smaller population is modeled in terms of spreads (or deviations) relative to the evolution of the former, but the spreads in the period and cohort effects between the larger and smaller populations depend on gravity or spread reversion parameters for the two effects. The larger the two gravity parameters, the more strongly the smaller population’s mortality rates move in line with those of the larger population in the long run. This is important where it is believed that the mortality rates between related populations should not diverge over time on grounds of biological reasonableness. The model is illustrated using an extension of the Age-Period-Cohort model and mortality rate data for English and Welsh males representing a large population and the Continuous Mortality Investigation assured male lives representing a smaller related population.
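A toy sketch of the spread-reversion idea follows: the smaller population's period effect equals the larger population's plus an AR(1) spread whose reversion speed plays the role of the gravity parameter. All values are invented, and this is only a caricature of the model in the article.

```python
# Hypothetical sketch: "gravity"-style spread reversion between a large and
# a small population's period effects. Parameter values are invented.
import numpy as np

rng = np.random.default_rng(7)
T = 50
kappa_big = np.cumsum(rng.normal(-1.0, 0.8, size=T))   # large-population period effect

gravity = 0.3          # spread-reversion (gravity) parameter, made up
spread = np.zeros(T)
for t in range(1, T):
    # Spread mean-reverts toward zero; a larger gravity parameter means
    # faster convergence of the small population toward the large one.
    spread[t] = (1.0 - gravity) * spread[t - 1] + rng.normal(scale=0.3)

kappa_small = kappa_big + spread
print("final spread between populations:", spread[-1])
```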