Similar Literature
1.
The Cardiovascular Health Study (CHS) analyzes risk factors for coronary heart disease and stroke in people age 65 and older. Because CHS is designed to study cardiovascular risk factors comprehensively in an elderly population, it provides a unique opportunity to study the association of risk factors with mortality as well as morbidity risk. As the elderly have grown both as a population segment and as a life insurance market, the need to stratify mortality more precisely within a standard risk group of the elderly has grown as well. This exploratory analysis assesses medical factors that could be used to improve mortality risk stratification within a "standard" mortality population, using the CHS public use data set. Participants with a personal history of cardiovascular disease, diabetes, or major electrocardiographic abnormalities were excluded from the analysis in order to mimic a standard life insurance selection process. Cox proportional hazards regression was then used to study 10 medical risk factors. The model suggested that forced vital capacity >80% of predicted, serum creatinine <1.5 mg/dL (133 μmol/L), hemoglobin >11 g/dL (110 g/L), and serum albumin >3.5 g/dL (35 g/L) are significantly associated (p = 0.05) with favorable mortality. C-reactive protein <1 mg/L is associated with favorable mortality at borderline significance (p = 0.09). On the other hand, a family history of cardiovascular disease (MI and/or stroke) and low BMI (<26 kg/m²) are associated with unfavorable mortality. Total-to-HDL cholesterol ratio <6, supine systolic blood pressure ≤140 mmHg, and the presence of minor resting electrocardiographic findings were not statistically significant in the multivariate model. Further assessment of the predictive value of the "significant" medical factors identified is required in insured lives.
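A minimal sketch of the kind of Cox proportional hazards fit described above, using Python's lifelines package. The file name, column names, and thresholds shown are hypothetical stand-ins for the CHS public use variables, not the study's actual coding.

    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical extract of the CHS public use data after the
    # insurance-style exclusions described above.
    df = pd.read_csv("chs_standard_risks.csv")

    # Dichotomize risk factors at the thresholds used in the analysis.
    df["fvc_over_80pct"] = (df["fvc_pct_predicted"] > 80).astype(int)
    df["creatinine_lt_1_5"] = (df["serum_creatinine_mg_dl"] < 1.5).astype(int)
    df["low_bmi"] = (df["bmi"] < 26).astype(int)

    cph = CoxPHFitter()
    cph.fit(
        df[["followup_years", "died",
            "fvc_over_80pct", "creatinine_lt_1_5", "low_bmi"]],
        duration_col="followup_years",
        event_col="died",
    )
    cph.print_summary()  # hazard ratios and p-values per factor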

2.
Predictive models of health care costs have become mainstream in much health care actuarial work. The Affordable Care Act requires the use of predictive-modeling-based risk-adjuster models to transfer revenue between different health exchange participants. Although the predictive accuracy of these models has been examined in a number of studies, their accuracy and use in applications other than risk adjustment have received little attention. We investigate predictive modeling of future health care costs using several statistical techniques. Our analysis is based on a dataset of 30,000 insureds containing claims information from two contiguous years. The dataset contains more than 100 covariates for each insured, including a detailed breakdown of past costs and causes encoded via coexisting-condition flags. We discuss statistical models for the relationship between next-year costs and medical and cost information, used to predict the mean and quantiles of future cost, rank risks, and identify the most predictive covariates. A comparison of multiple models is presented, including (in addition to the traditional linear regression model underlying risk adjusters) Lasso GLM, multivariate adaptive regression splines, random forests, decision trees, and boosted trees. A detailed performance analysis shows that the traditional regression approach does not perform well and that more accurate models are possible.
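As a rough illustration of such a comparison, the sketch below cross-validates most of the model families named in the abstract using scikit-learn. The synthetic regression data are a stand-in for the 100+ cost and condition-flag covariates; none of this reproduces the paper's actual dataset.

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.model_selection import cross_val_score
    from sklearn.linear_model import LinearRegression, LassoCV
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

    # Synthetic stand-in for next-year cost as a function of prior-year
    # cost covariates and condition flags.
    X, y = make_regression(n_samples=3000, n_features=100, noise=50.0,
                           random_state=0)

    models = {
        "ols": LinearRegression(),
        "lasso": LassoCV(cv=5),
        "tree": DecisionTreeRegressor(max_depth=6),
        "forest": RandomForestRegressor(n_estimators=200, random_state=0),
        "boosted": GradientBoostingRegressor(n_estimators=200, random_state=0),
    }
    for name, model in models.items():
        mae = -cross_val_score(model, X, y, cv=5,
                               scoring="neg_mean_absolute_error").mean()
        print(f"{name:8s} cross-validated MAE: {mae:,.1f}")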

3.
The CAPM as the benchmark asset pricing model generally performs poorly in both developed and emerging markets. We investigate whether allowing the model parameters to vary improves the performance of the CAPM and the Fama–French model. Conditional asset pricing models scaled by conditioning variables such as trading volume and dividend yield generally result in small pricing errors. However, a graphical analysis reveals that the predictions of conditional models are generally upward biased. We demonstrate that the bias in prediction may be the consequence of ignoring frequent large variation in asset returns caused by volatile institutional, political and macroeconomic conditions, which is characterised by excess kurtosis. An unconditional Fama–French model augmented with a cubic market factor performs best among the competing models when local risk factors are employed. Moreover, conditional models with global risk factors scaled by global conditioning variables perform better than unconditional models with global risk factors.
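The sketch below illustrates the two specifications in question: a conditional CAPM whose beta varies with a lagged conditioning variable (here dividend yield), and an unconditional model with a cubic market factor to absorb fat tails. The data-generating process and coefficients are invented for illustration.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    T = 240
    mkt = rng.standard_t(df=5, size=T) * 0.05       # fat-tailed market excess returns
    dy = rng.normal(0.03, 0.01, size=T)             # lagged dividend yield
    r = 0.8 * mkt + 0.3 * dy * mkt + rng.normal(0, 0.02, T)  # asset excess return

    # Conditional CAPM: r_t = a + (b0 + b1 * DY_{t-1}) * mkt_t + e_t
    X_cond = sm.add_constant(np.column_stack([mkt, dy * mkt]))
    print(sm.OLS(r, X_cond).fit().params)

    # Cubic market factor: r_t = a + b1*mkt + b2*mkt^2 + b3*mkt^3 + e_t
    X_cubic = sm.add_constant(np.column_stack([mkt, mkt**2, mkt**3]))
    print(sm.OLS(r, X_cubic).fit().params)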

4.
ABSTRACT

We present a model for post-retirement mortality where differentials automatically reduce with increasing age, but without the fitted mortality rates for subgroups crossing over. Selection effects are catered for, as are age-modulated time trends and seasonal variation in mortality. Central to the model are Hermite splines, which permit parsimonious modelling of complex risk factors in even modest-sized portfolios. The model is therefore suitable for the stand-alone analysis of experience data for reinsurance, bulk annuities and longevity swaps. We also illustrate the contrast between the statistical significance of a risk factor and its financial significance, and discuss reasons why one might include risk factors like season that are not directly financially significant.
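A minimal sketch of the cubic Hermite basis underlying such models. The intuition is that a risk-factor effect loaded only on the youngest-age parameter is multiplied by h00(t), which decays to zero at the oldest age, so differentials taper automatically without fitted rates crossing over. The age range and parameter values below are invented, and this is only the skeleton of the model described in the paper.

    import numpy as np

    def hermite_basis(t):
        """Cubic Hermite basis functions on t in [0, 1]."""
        h00 = 2 * t**3 - 3 * t**2 + 1
        h01 = -2 * t**3 + 3 * t**2
        h10 = t**3 - 2 * t**2 + t
        h11 = t**3 - t**2
        return h00, h01, h10, h11

    x0, x1 = 50.0, 105.0                # age range of the spline (assumed)
    ages = np.linspace(60, 100, 5)
    t = (ages - x0) / (x1 - x0)
    h00, h01, _, _ = hermite_basis(t)

    alpha, omega = -5.0, 0.0            # log-mortality at x0 and x1 (invented)
    effect = -0.3                       # e.g. a socio-economic differential
    log_mu_base = alpha * h00 + omega * h01
    log_mu_sub = (alpha + effect) * h00 + omega * h01
    for a, d in zip(ages, log_mu_sub - log_mu_base):
        print(f"age {a:5.1f}: differential in log mu = {d:+.3f}")  # shrinks with age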

5.
Abstract

The use of clinical literature to set risk classification standards for life insurance underwriting stems from the need to set the most accurate standards using the best available information. A necessary hurdle in this process is converting any excess mortality observed in a clinical study to the appropriate rating for use in underwriting. A widely accepted model in the insurance industry, the Excess Death Rate model, treats the excess as additive to the conditional probability of death for an insurance company’s unimpaired class.
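For concreteness, the additive arithmetic of the Excess Death Rate model amounts to a few lines; the rates below are purely illustrative.

    # Excess Death Rate (EDR) model: the clinical excess is added to the
    # unimpaired conditional death probability. Numbers are invented.
    q_unimpaired = 0.004   # insurer's unimpaired-class annual mortality rate
    edr = 0.002            # excess deaths per life-year from the clinical study

    q_impaired = q_unimpaired + edr
    rating = q_impaired / q_unimpaired * 100
    print(f"impaired q = {q_impaired:.4f} (~{rating:.0f}% of unimpaired mortality)")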

In this paper we test the validity of that model versus other common predictive models of excess mortality in an insured population. Applying these models to National Health and Nutrition Examination Survey (NHANES) data, we derive estimates for excess mortality from three commonly seen underwriting impairments in what could be considered a clinical population. These estimates are added to an estimate of an insurance company’s unimpaired mortality class and then used to predict deaths in an “insurable” subset of that clinical population.

The Excess Death Rate model performed the best of all models, having the smallest cumulative difference of actual to predicted deaths. The use of publicly available data, such as that in NHANES, could help bridge the gap between clinical literature and its application in insurance underwriting if insurable cohorts can be reliably identified from these generally healthy, ambulatory groups.

6.
Comparing early warning systems for banking crises
Despite the extensive literature on the prediction of banking crises by Early Warning Systems (EWSs), their practical use by policy makers is limited, even in the international financial institutions. This is paradoxical: the changing nature of banking risks as more economies liberalise and develop their financial systems, together with ongoing financial innovation, makes EWSs more necessary than ever for informing policies aimed at preventing crises. In this context, we assess logit and signal-extraction EWSs for banking crises on a comprehensive common dataset. We suggest that logit is the most appropriate approach for a global EWS and signal extraction for country-specific EWSs. Furthermore, it is important to consider the policy maker's objectives when designing predictive models and setting the related thresholds, since there is a sharp trade-off between correctly calling crises and false alarms.
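A sketch of a logit early-warning system with an explicit policy threshold, illustrating the trade-off between missed crises and false alarms noted above. The macro indicators and crisis-generating process are synthetic assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 500
    X = rng.normal(size=(n, 3))   # e.g. credit growth, M2/reserves, real rate
    w = np.array([1.2, 0.8, 0.5])
    y = (X @ w + rng.logistic(size=n) > 2.0).astype(int)  # crisis flag

    ews = LogisticRegression().fit(X, y)
    p = ews.predict_proba(X)[:, 1]

    for threshold in (0.2, 0.5):  # a cautious vs. a tolerant policy maker
        called = p > threshold
        hits = (called & (y == 1)).sum() / max(y.sum(), 1)
        false_alarms = (called & (y == 0)).sum() / max((y == 0).sum(), 1)
        print(f"threshold {threshold}: crises called {hits:.0%}, "
              f"false alarms {false_alarms:.0%}")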

7.
Vasicek and Fong [11] developed exponential spline functions as models of the interest rate term structure and claim such models are superior to polynomial spline models. It is found empirically that i) exponential spline term structure estimates are no more stable than estimates from a polynomial spline model; ii) data transformations implicit in the exponential spline model frequently condition the data so that it is difficult to obtain approximations in which one can place confidence; and iii) the asymptotic properties of the exponential spline model are frequently unrealistic. Estimation with exponential splines is no more convenient than estimation with polynomial splines and gives substantially identical estimates of the interest rate term structure.

8.
Most extrapolative stochastic mortality models are constructed in a similar manner. Specifically, when they are fitted to historical data, one or more series of time-varying parameters are identified. By extrapolating these parameters to the future, we can obtain a forecast of death probabilities and consequently cash flows arising from life-contingent liabilities. In this article, we first argue that, among various time-varying model parameters, those encompassed in the Cairns-Blake-Dowd (CBD) model (also known as Model M5) are most suitably used as indexes to indicate levels of longevity risk at different time points. We then investigate how these indexes can be jointly modeled with a more general class of multivariate time-series models, instead of a simple random walk that takes no account of cross-correlations. Finally, we study the joint prediction region for the mortality indexes. Such a region, as we demonstrate, can serve as a graphical longevity risk metric, allowing practitioners to compare the longevity risk exposures of different portfolios readily.
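A sketch of the CBD (M5) structure described above: for each year t, logit q(x,t) = kappa1_t + kappa2_t * (x - xbar), and the fitted (kappa1, kappa2) series are then modeled jointly, e.g. with a VAR, rather than as independent random walks. The mortality data below are synthetic.

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.tsa.api import VAR

    ages = np.arange(60, 90)
    years = np.arange(1980, 2020)
    xbar = ages.mean()

    # Synthetic death probabilities with a downward mortality trend.
    k1_true = -4.0 - 0.02 * (years - years[0])
    k2_true = 0.10 + 0.0005 * (years - years[0])
    logit_q = k1_true[:, None] + k2_true[:, None] * (ages - xbar)[None, :]
    q = 1 / (1 + np.exp(-logit_q))

    # Stage 1: recover the two mortality indexes year by year.
    kappas = np.array([
        sm.OLS(np.log(q_t / (1 - q_t)),
               sm.add_constant(ages - xbar)).fit().params
        for q_t in q
    ])

    # Stage 2: joint time-series model for the index changes, capturing
    # cross-correlations that a pair of random walks would ignore.
    var = VAR(np.diff(kappas, axis=0)).fit(maxlags=1)
    print(var.params)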

9.
Predicting financial networks has important implications in finance, yet this direction has received relatively little research attention. This study aims to predict cross-market linkage strengths in financial networks using dynamic edge weight prediction. We build edge weight prediction models using deep-learning-based unidirectional and bidirectional long short-term memory (LSTM and BiLSTM) approaches. The models are built on temporally varying equity cross-market networks in Asia. A rolling-window approach is used to generate temporally synchronous observations of edge weight structures and the resulting networks. The models are trained on a series of historical network structures, tuned over their hyperparameters, and validated using nested cross-validation. The applicability of the optimized models was assessed for different scenarios, with promising results for edge weight prediction on both unfiltered and filtered structures. The study also shows that the predictive performance of the BiLSTM model is the same across correlated (positively weighted) and anti-correlated (negatively weighted) edges, whereas for LSTM this does not hold. The results also demonstrate that the proposed models predict edge weights better in normal conditions than in the crisis period. The proposed models are benchmarked against ARIMA and RNN. To our knowledge, this is the first attempt to build network prediction models for cross-market equity networks. The findings have key implications for managing international portfolio diversification and controlling systemic risk transmission.
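The rolling-window set-up described above might be sketched as follows in tf.keras; the network sizes, window length, and stand-in edge-weight series are all assumptions, not the paper's architecture.

    import numpy as np
    import tensorflow as tf

    n_edges, window, n_samples = 45, 20, 300  # e.g. a 10-node market network
    rng = np.random.default_rng(2)
    series = rng.normal(size=(n_samples + window, n_edges))  # stand-in weights

    # Rolling windows of past edge-weight vectors predict the next vector.
    X = np.stack([series[i:i + window] for i in range(n_samples)])
    y = series[window:window + n_samples]

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window, n_edges)),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
        tf.keras.layers.Dense(n_edges),       # one output per edge
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)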

10.
This paper examines the impact of Kalman filtering as a technique for modeling the risk levels of managed funds. Using a sample of Australian multi-sector trusts, we examine selectivity and market-timing performance using conventional performance models alongside Kalman filter models that allow beta to vary via a random walk. Further, we consider the stability and asymmetry of these performance measures, together with a measure of volatility timing arising from a cubic model of fund performance. We find that the positive selectivity (and negative market timing) indicated by the conventional models is not present under the Kalman filter model, which tends to show neutral performance on both measures. However, both models confirm a strong tendency toward negative volatility timing.
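A minimal Kalman filter for a market model with a random-walk beta, r_t = beta_t * m_t + e_t with beta_t = beta_{t-1} + w_t, in the spirit of the approach described above (alpha suppressed for brevity). The noise variances are invented; in practice they would be estimated by maximum likelihood.

    import numpy as np

    rng = np.random.default_rng(3)
    T = 500
    m = rng.normal(0, 0.04, T)                      # market excess returns
    beta_true = np.cumsum(rng.normal(0, 0.02, T)) + 1.0
    r = beta_true * m + rng.normal(0, 0.01, T)      # fund excess returns

    sigma_e2, sigma_w2 = 0.01**2, 0.02**2           # obs / state noise variances
    beta, P = 1.0, 1.0                              # initial state and variance
    beta_path = np.empty(T)
    for t in range(T):
        P = P + sigma_w2                            # predict: random-walk state
        S = m[t] ** 2 * P + sigma_e2                # innovation variance
        K = P * m[t] / S                            # Kalman gain
        beta = beta + K * (r[t] - beta * m[t])      # update with return surprise
        P = (1 - K * m[t]) * P
        beta_path[t] = beta
    print(f"final filtered beta: {beta_path[-1]:.3f} (true {beta_true[-1]:.3f})")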

11.
Population studies have consistently reported the increased risk of coronary heart disease mortality and sudden death in subjects with resting electrocardiogram evidence of unambiguous ST depression or T wave abnormalities. However, more subtle variations in normal electrocardiographic findings may also provide predictive and prognostic information. This case study illustrates the potential risk selection implications of such changes.

12.
Although numerous statistical failure prediction models are described in the literature, appropriate tests of whether such methodologies really work in practice are lacking. Validation exercises typically use small samples of non-failed firms and are not true tests of ex ante predictive ability, the key issue of relevance to model users. This paper provides the operating characteristics of the well-known Taffler (1983) UK-based z-score model for the first time and evaluates its performance over the 25-year period since it was originally developed. The model is shown to have clear predictive ability over this extended time period and dominates more naïve prediction approaches. This study also illustrates the economic value to a bank of using such methodologies for default risk assessment. Prima facie, such results also demonstrate the predictive ability of the published accounting numbers and associated financial ratios used in the z-score calculation.

13.
Modeling obesity prevalence is an important part of evaluating mortality risk. A large literature exists on modeling mortality rates, but very few models have been developed for obesity prevalence. In this study we propose a new stochastic approach to modeling obesity prevalence that accounts for both period and cohort effects as well as the curvilinear effects of age. The model has good predictive power, as we utilize multivariate ARIMA models for forecasting future obesity rates. The proposed methodology is illustrated on the U.S. population, aged 23–90, over the period 1988–2012. Forecasts are validated on actual data for 2013–2015, and the results suggest that the proposed model performs better than existing models.

14.
Obesity assessed by body mass index (BMI) is associated with increased mortality risk, but there is uncertainty about whether BMI is the best way to measure obesity; waist circumference (WC) has been proposed as a better measure. The Swiss Re BMI/WC Study was conducted to determine whether BMI or WC is a better predictor of future all-cause mortality in a large male insurance population. Using Cox proportional hazards models, risk ratios for increasing BMI and WC were 1.033 (P < .001) and 1.027 (P < .001), respectively. Risk ratios for obesity defined by BMI ≥30 kg/m² and WC ≥40 inches were 1.33 (P < .001) and 1.20 (P = .002), respectively. In this study, BMI and WC are essentially equivalent in their ability to predict mortality risk in a male insurance population. Obesity, measured by either BMI or WC, has important underwriting and pricing implications.
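A worked reading of the per-unit risk ratios quoted above: a per-unit hazard ratio compounds multiplicatively across units, so a modest-looking 1.033 per BMI unit implies a sizable differential across a plausible range. The comparison points chosen are illustrative.

    bmi_rr_per_unit = 1.033
    wc_rr_per_inch = 1.027

    # Hazard ratio between two individuals differing by 10 BMI units / 8 inches.
    print(f"BMI 35 vs 25: {bmi_rr_per_unit ** 10:.2f}x hazard")      # ~1.38
    print(f"WC 44 vs 36 inches: {wc_rr_per_inch ** 8:.2f}x hazard")  # ~1.24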

15.
Providers of life annuities and pensions need to consider both systematic mortality improvement trends and mortality heterogeneity. Although how mortality improvement varies with age and gender at the population level is well studied, how trends vary with risk factors remains relatively unexplored. This article assesses how systematic mortality improvement trends vary with individual risk characteristics using individual-level longitudinal data from the U.S. Health and Retirement Study between 1994 and 2009. Initially, a Lee-Carter model is used to assess mortality improvement trends by grouping individuals with similar risk characteristics of gender, education, and race. We then fit a longitudinal mortality model to individual-level data allowing for heterogeneity and time trends in individual-level risk factors. Our results show how survey data can provide valuable insights into both mortality heterogeneity and improvement trends more effectively than commonly used aggregate models. We show how mortality improvement differs across individuals with different risk factors. Significantly, at an individual level, mortality improvement trends have been driven by changes in health history such as high blood pressure, cancer, and heart problems rather than risk factors such as education, marital status, body mass index, and smoker status.
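A compact sketch of the first-stage Lee-Carter fit, ln m(x,t) = a_x + b_x * k_t, estimated via SVD under the usual identification constraints; the log-mortality matrix below is synthetic rather than HRS data.

    import numpy as np

    rng = np.random.default_rng(4)
    ages, years = np.arange(50, 91), np.arange(1994, 2010)
    a = -9 + 0.09 * (ages - 50)                   # baseline log-mortality
    k = -0.5 * np.arange(len(years))              # improvement trend
    log_m = (a[:, None] + 0.02 * k[None, :]
             + rng.normal(0, 0.01, (len(ages), len(years))))

    a_x = log_m.mean(axis=1)                      # age effect
    U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
    b_x = U[:, 0] / U[:, 0].sum()                 # normalize sum(b_x) = 1
    k_t = s[0] * Vt[0] * U[:, 0].sum()
    print(k_t)                                    # mortality index by year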

16.
ABSTRACT

Modeling multivariate time-series aggregate losses is an important actuarial topic that is very challenging because losses can be serially dependent, with heterogeneous dependence structures across loss types and business lines. In this paper, we investigate a flexible class of multivariate Cox Hidden Markov Models for the joint arrival process of loss events. Some of the nice properties possessed by this class of models, such as closed-form expressions, thinning properties and model versatility, are discussed in detail. We provide the expectation-maximization (EM) algorithm for efficient model calibration. Applying the proposed model to an operational risk dataset, we demonstrate that the model offers sufficient flexibility to capture most characteristics of the observed loss frequencies. By modeling the log-transformed loss severities through a mixture of Erlang distributions, we can model the aggregate losses. Finally, out-of-sample testing shows that the proposed model is adequate to predict short-term future operational risk losses.

17.
The accurate prediction of long-term care insurance (LTCI) mortality, lapse, and claim rates is essential when making informed pricing and risk management decisions. Unfortunately, academic literature on the subject is sparse and industry practice is limited by software and time constraints. In this article, we review current LTCI industry modeling methodology, which is typically Poisson regression with covariate banding/modification and stepwise variable selection. We test the claim that covariate banding improves predictive accuracy, examine the potential pitfalls of stepwise selection, and contend that the assumptions required for Poisson regression are not appropriate for LTCI data. We propose several alternative models specifically tailored toward count responses with an excess of zeros and overdispersion. Using data from a large LTCI provider, we evaluate the predictive capacity of random forests and of generalized linear and additive models with zero-inflated Poisson, negative binomial, and Tweedie errors. These alternatives are compared to previously developed Poisson regression models.

Our study confirms that variable modification is at best unnecessary and that automatic stepwise model selection is dangerous. After demonstrating severe overprediction of LTCI mortality and lapse rates under the Poisson assumption, we show that a Tweedie GLM enables much more accurate predictions. Our Tweedie regression models improve average predictive accuracy (measured by several prediction error statistics) over Poisson regression models by as much as four times for mortality rates and 17 times for lapse rates.
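A sketch of a Tweedie GLM for a rate response with exposure, in the spirit of the comparison above, using statsmodels. The covariates, simulated data, and choice of variance power are all assumptions; in practice the variance power would itself be estimated.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(5)
    n = 2000
    df = pd.DataFrame({
        "age": rng.integers(60, 95, n),
        "duration": rng.integers(0, 15, n),
        "exposure": rng.uniform(0.5, 1.0, n),    # policy-years observed
    })
    df["lapses"] = rng.poisson(
        0.03 * df["exposure"] * np.exp(0.01 * (df["age"] - 60)))

    tweedie = smf.glm(
        "lapses ~ age + duration",
        data=df,
        exposure=df["exposure"],
        family=sm.families.Tweedie(var_power=1.5),  # between Poisson (1) and gamma (2)
    ).fit()
    print(tweedie.summary())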


18.
This article develops two models for predicting the default of Russian small and medium-sized enterprises (SMEs). The most general questions the article attempts to answer are 'Can the default risk of Russian SMEs be assessed with a statistical model?' and 'Would such a model demonstrate sufficiently high predictive accuracy?' The article uses a relatively large data set of financial statements and employs discriminant analysis as the statistical methodology. Default is defined as legal bankruptcy. The basic model contains only financial ratios; it is extended by adding size and age variables. Liquidity and profitability turn out to be the key factors in predicting default. The resulting models have high predictive accuracy and the potential to be of practical use in Russian SME lending.
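A sketch of a discriminant-analysis default model on financial ratios, mirroring the set-up above; the ratio names, data-generating process, and default rule are hypothetical.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(6)
    n = 1000
    liquidity = rng.normal(1.5, 0.5, n)           # current ratio
    profitability = rng.normal(0.05, 0.08, n)     # return on assets
    size = rng.normal(15, 2, n)                   # log total assets
    X = np.column_stack([liquidity, profitability, size])
    # Weak liquidity and profitability drive (synthetic) legal bankruptcy.
    default = (liquidity + 8 * profitability
               + rng.normal(0, 1, n) < 0.8).astype(int)

    lda = LinearDiscriminantAnalysis()
    acc = cross_val_score(lda, X, default, cv=5).mean()
    print(f"cross-validated classification accuracy: {acc:.2%}")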

19.
Once a pricing kernel is established, bond prices and all other interest rate claims can be computed. Alternatively, the pricing kernel can be deduced from observed prices of bonds and selected interest rate claims. Examples of the former approach include the celebrated Cox, Ingersoll, and Ross (1985b) model and the more recent model of Constantinides (1992). Examples of the latter include the Black, Derman, and Toy (1990) model and the Heath, Jarrow, and Morton paradigm (1992) (hereafter HJM). In general, these latter models are not Markov. Fortunately, when suitable restrictions are imposed on the class of volatility structures of forward rates, then finite-state variable HJM models do emerge. This article provides a linkage between the finite-state variable HJM models, which use observables to induce a pricing kernel, and the alternative approach, which proceeds directly to price after a complete specification of a pricing kernel. Given such linkages, we are able to explicitly reveal the relationship between state-variable models, such as Cox, Ingersoll, and Ross, and the finite-state variable HJM models. In particular, our analysis identifies the unique map between the set of investor forecasts about future levels of the drift of the pricing kernel and the manner by which these forecasts are revised, to the shape of the term structure and its volatility. For an economy with square root innovations, the exact mapping is made transparent.

20.
The projection of mortality rates is an essential part of valuing liabilities in life insurance portfolios and pension schemes. An important tool for risk management and solvency purposes is a stochastic projection model. We show that ARIMA models can be better representations of mortality time-series than simple random-walk models. We also consider the issue of parameter risk in time-series models from the point of view of an insurer using them for regulatory risk reporting – formulae are given for decomposing overall risk into undiversifiable trend risk (parameter uncertainty) and diversifiable volatility. Particular attention is given to the contrasts in how academic researchers might view these models and how insurance regulators and practitioners in life offices might use them. Using a bootstrap method we find that, while certain kinds of parameter risk are negligible, others are too material to ignore. We also find that an objective model selection criterion, such as goodness of fit to past data, can result in the selection of a model with unstable parameter values. While this aspect of the model is superficially undesirable, it also leads to slightly higher capital requirements and thus makes the model of keen interest to regulators. Our conclusions have relevance to insurers using value-at-risk capital assessments in the European Union under Solvency II, but also territories using conditional tail expectations such as Australia, Canada and Switzerland.
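A sketch of the two ingredients discussed above: an ARIMA fit to a mortality index (here as an AR(1) with drift on the annual changes) and a residual bootstrap to gauge parameter (trend) risk separately from volatility. The index series is synthetic, and the bootstrap scheme is a crude stand-in for the paper's method.

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(7)
    k = np.cumsum(rng.normal(-0.5, 0.8, 50))     # synthetic mortality index

    dk = np.diff(k)                              # work with annual changes
    fit = ARIMA(dk, order=(1, 0, 0), trend="c").fit()
    print("drift, AR(1), sigma2:", fit.params)

    # Residual bootstrap: refit on resampled shock paths; the spread of the
    # refitted parameters measures undiversifiable trend (parameter) risk.
    const, ar1 = fit.params[0], fit.params[1]
    resid = fit.resid
    boot = []
    for _ in range(200):
        e = rng.choice(resid, size=len(dk), replace=True)
        sim = np.empty(len(dk))
        prev = dk[0]
        for t in range(len(dk)):
            sim[t] = const + ar1 * (prev - const) + e[t]  # AR(1) around drift
            prev = sim[t]
        boot.append(ARIMA(sim, order=(1, 0, 0), trend="c").fit().params[:2])
    print("bootstrap s.e. of (drift, AR1):", np.array(boot).std(axis=0))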
