Similar Literature
20 similar documents found.
1.
Reversed hazard rates have proved very useful in survival analysis and reliability, especially in the study of parallel systems and in the analysis of left-censored lifetime data. In this paper, we derive a class of bivariate distributions having marginal proportional reversed hazard rates. We then introduce a class of proportional reversed hazard rates frailty models and propose a multivariate correlated gamma frailty model. Bivariate reversed hazard rates and an association measure are discussed in terms of the frailty parameters.
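To make the proportional reversed hazards structure concrete: the reversed hazard rate is r(t) = f(t)/F(t), and under the proportional reversed hazards model F(t | θ) = F₀(t)^θ the frailty θ simply scales the baseline reversed hazard. A minimal numpy sketch, assuming a standard exponential baseline (our choice for illustration, not the paper's):

```python
import numpy as np

def reversed_hazard(f, F, t):
    """Reversed hazard rate r(t) = f(t) / F(t)."""
    return f(t) / F(t)

# Baseline: standard exponential, F0(t) = 1 - exp(-t).
F0 = lambda t: 1.0 - np.exp(-t)
f0 = lambda t: np.exp(-t)

theta = 2.5  # frailty (proportionality) parameter -- illustrative value
F_theta = lambda t: F0(t) ** theta                      # PRH model: F(t|theta) = F0(t)^theta
f_theta = lambda t: theta * F0(t) ** (theta - 1) * f0(t)

t = np.linspace(0.1, 5.0, 50)
r0 = reversed_hazard(f0, F0, t)
r_theta = reversed_hazard(f_theta, F_theta, t)

# Under proportional reversed hazards, r(t|theta) = theta * r0(t):
assert np.allclose(r_theta, theta * r0)
```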

2.
Survival models allowing for random effects (e.g., frailty models) have been widely used for analyzing clustered time-to-event data. Accelerated failure time (AFT) models with random effects are useful alternatives to frailty models. Because survival times are modeled directly, interpretation of the fixed and random effects is straightforward. Moreover, the fixed-effect estimates are robust against various violations of the assumed model. In this paper, we propose a penalized h-likelihood (HL) procedure for variable selection of fixed effects in AFT random-effect models. For variable selection, we consider three penalty functions: the least absolute shrinkage and selection operator (LASSO), the smoothly clipped absolute deviation (SCAD), and HL. We demonstrate via simulation studies that the proposed variable selection procedure is robust against misspecification of the assumed model. The proposed method is illustrated using data from a bladder cancer clinical trial.
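Of the three penalties, SCAD is the least standard; here is a sketch of the SCAD penalty of Fan and Li (2001), with the conventional constant a = 3.7 (the tuning value λ below is illustrative):

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty, evaluated elementwise: L1 near zero, a quadratic
    transition, and a constant ceiling so large effects are not shrunk."""
    b = np.abs(beta)
    return np.where(
        b <= lam,
        lam * b,                                            # L1 region near zero
        np.where(
            b <= a * lam,
            -(b**2 - 2 * a * lam * b + lam**2) / (2 * (a - 1)),  # transition
            (a + 1) * lam**2 / 2,                           # flat: no bias for large beta
        ),
    )

beta = np.linspace(-4, 4, 9)
print(scad_penalty(beta, lam=1.0))
```

The flat tail is what distinguishes SCAD from LASSO: beyond a·λ the penalty no longer grows, so large coefficients are estimated nearly unbiasedly.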

3.
Extensions of the Cox proportional hazards model for survival data are studied in which allowance is made for unobserved heterogeneity and for correlation between the lifetimes of several individuals. The extended models are frailty models inspired by Yashin et al. (1995). Estimation is carried out using the EM algorithm. Inference is discussed and potential applications are outlined, in particular to statistical research in human genetics using twin data or adoption data, aimed at separating the effects of genetic and environmental factors on mortality.
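For shared gamma frailty, the E-step of the EM algorithm is available in closed form. A sketch under the usual Gamma(ν, ν) frailty (mean 1, variance 1/ν), where D_i is the number of events in cluster i and Λ_i its accumulated cumulative hazard (the cluster values below are toy numbers):

```python
import numpy as np
from scipy.special import digamma

def gamma_frailty_estep(D, Lam, nu):
    """E-step for a shared gamma frailty model: with Z_i ~ Gamma(nu, nu),
    the posterior of Z_i given the data is Gamma(nu + D_i, nu + Lam_i),
    giving closed-form conditional expectations."""
    Ez = (nu + D) / (nu + Lam)                 # E[Z_i | data]
    Elog = digamma(nu + D) - np.log(nu + Lam)  # E[log Z_i | data]
    return Ez, Elog

# Toy values for three clusters: event counts and accumulated cumulative hazards.
D = np.array([2, 0, 1])
Lam = np.array([1.5, 0.8, 2.0])
print(gamma_frailty_estep(D, Lam, nu=2.0))
```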

4.
The aim of this paper is to derive methodology for designing 'time to event' experiments. In comparison to estimation, design aspects of 'time to event' experiments have received relatively little attention. We show that gains in the efficiency of parameter estimators and in the use of experimental material can be made using optimal design theory. The types of models considered include classical failure-data and accelerated-testing situations, as well as frailty models, each involving covariates that influence the outcome. The objective is to construct an optimal design based on the values of the covariates and the associated model, or indeed a candidate set of models. We consider D-optimality and create compound optimality criteria to derive optimal designs for multi-objective situations which, for example, focus on the number of failures as well as on the estimation of parameters. The approach is motivated and demonstrated using common failure/survival models, for example the Weibull distribution, product assessment and frailty models.
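As a hedged illustration of the D-criterion (the model and numbers here are ours, not the paper's): for an exponential regression with hazard exp(b0 + b1·x) under administrative censoring at time c, the per-subject Fisher information is P(event) · f(x)f(x)ᵀ with f(x) = (1, x), which ties the design criterion directly to the expected number of failures:

```python
import numpy as np

def info_matrix(xs, weights, beta, c):
    """Fisher information for an exponential regression with hazard
    lambda(x) = exp(b0 + b1 * x) and administrative censoring at time c.
    Per-subject information about (b0, b1) is P(event) * f(x) f(x)^T, so
    the criterion rewards designs that produce failures and spread in x."""
    M = np.zeros((2, 2))
    for x, w in zip(xs, weights):
        lam = np.exp(beta[0] + beta[1] * x)
        p_event = 1.0 - np.exp(-lam * c)        # expected failure probability
        f = np.array([1.0, x])
        M += w * p_event * np.outer(f, f)
    return M

def log_det_D(xs, weights, beta, c):
    """D-criterion: log-determinant of the information matrix."""
    sign, logdet = np.linalg.slogdet(info_matrix(xs, weights, beta, c))
    return logdet if sign > 0 else -np.inf

beta0 = (np.log(0.5), 1.0)   # a locally optimal design needs a prior guess
# Compare two candidate two-point designs with equal weights on x in [0, 1]:
print(log_det_D([0.0, 1.0], [0.5, 0.5], beta0, c=2.0))
print(log_det_D([0.3, 0.7], [0.5, 0.5], beta0, c=2.0))
```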

5.
In hazard models, it is assumed that all heterogeneity is captured by a set of theoretically relevant covariates. In many applications, however, there are ample reasons for unobserved heterogeneity due to omitted or unmeasured factors. If there is unmeasured frailty, the hazard will be a function not only of the covariates but also of the unmeasured frailty. This paper discusses the implications of unobserved heterogeneity for parameter estimates, with an application to the analysis of the effect of infant death on subsequent birth timing in Ghana and Kenya using DHS data. Using lognormal accelerated failure time models with and without frailty, we found that standard models that do not control for unobserved heterogeneity produced biased estimates, overstating the degree of positive dependence and underestimating the degree of negative dependence. The implications of the findings are discussed.
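A minimal sketch of the 'without frailty' benchmark: a lognormal AFT model fitted by maximum likelihood with right censoring. All data below are simulated; the frailty variant would add a random effect to the linear predictor:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def lognormal_aft_negloglik(params, t, delta, X):
    """Negative log-likelihood of a lognormal AFT model without frailty:
    log T = X @ beta + sigma * eps, eps ~ N(0, 1); delta = 1 for events,
    0 for right-censored observations."""
    beta, sigma = params[:-1], np.exp(params[-1])   # log-scale keeps sigma > 0
    z = (np.log(t) - X @ beta) / sigma
    ll_event = norm.logpdf(z) - np.log(sigma) - np.log(t)  # lognormal density
    ll_cens = norm.logsf(z)                                 # P(T > t)
    return -np.sum(np.where(delta == 1, ll_event, ll_cens))

# Simulated data: intercept plus one covariate.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
t_full = np.exp(X @ np.array([1.0, 0.5]) + 0.8 * rng.normal(size=n))
c = rng.exponential(8.0, size=n)                   # censoring times
t, delta = np.minimum(t_full, c), (t_full <= c).astype(int)

fit = minimize(lognormal_aft_negloglik, x0=np.zeros(3), args=(t, delta, X))
print(fit.x[:2], np.exp(fit.x[2]))                 # beta estimates and sigma
```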

6.
In this article a novel approach is proposed for analyzing clustered survival data that are subject to extra variation arising from the clustering of survival times. This is accomplished by extending the Cox proportional hazards model to a frailty model in which the cluster-specific shared frailty is modeled nonparametrically: we assume a nonparametric Dirichlet process for the frailty distribution. In this semiparametric setup, we propose a hybrid method for drawing model-based inferences, in which parameter estimation is performed with a Monte Carlo expected conditional maximization algorithm. A simulation study is conducted to assess the efficiency of the methodology, which is then illustrated on real-life data on recurrence times to infection in kidney patients.
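The Dirichlet process frailty can be sketched via its truncated stick-breaking representation. The gamma base measure and concentration α = 2 below are illustrative assumptions, not the paper's choices:

```python
import numpy as np

def dp_stick_breaking(alpha, base_sampler, n_atoms, rng):
    """Truncated stick-breaking draw from a Dirichlet process DP(alpha, G0):
    weights w_k = v_k * prod_{j<k}(1 - v_j) with v_k ~ Beta(1, alpha),
    atoms drawn i.i.d. from the base measure G0."""
    v = rng.beta(1.0, alpha, size=n_atoms)
    w = v * np.concatenate([[1.0], np.cumprod(1.0 - v)[:-1]])
    return base_sampler(n_atoms), w

rng = np.random.default_rng(1)
# Base measure G0: a gamma distribution with mean 1, a common frailty choice.
atoms, w = dp_stick_breaking(alpha=2.0,
                             base_sampler=lambda k: rng.gamma(2.0, 0.5, size=k),
                             n_atoms=100, rng=rng)
# Draw shared frailties for ten clusters from the (truncated) random measure:
frailties = rng.choice(atoms, size=10, p=w / w.sum())
print(frailties)
```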

7.
Information on patients' disease history and comorbidity can often be of great value in predicting survival, for example in cancer research. In this paper a model is presented that accommodates such information by combining relative survival and frailty. Relative survival is used to model the excess risk of dying from recent concurrent diseases. Individual frailty allows estimation of a 'selection effect', which occurs if patients who have survived much hazard in the past are tougher and therefore tend to live longer than those who have survived less. The results are shown to be independent of the chosen family of frailty distributions when heterogeneity is small, and to lead to a simple proportional excess hazards model. The model is applied to data from the Leiden University Medical Center on patients with head/neck tumors, using information on previous tumors.
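In relative-survival form, the individual hazard decomposes into a known population hazard plus an excess hazard, with the frailty acting multiplicatively on the excess part; a hedged sketch of that decomposition (notation ours):

```latex
\lambda_i(t) = \lambda_{\mathrm{pop}}(t \mid \text{age}_i, \text{sex}_i)
             + Z_i\,\lambda_{\mathrm{exc}}(t \mid x_i),
\qquad
\lambda_{\mathrm{exc}}(t \mid x_i) = \lambda_0(t)\exp(\beta^\top x_i),
```

where Z_i is a frailty with mean 1. The 'selection effect' arises because, among patients who have accumulated much hazard in the past, the survivors tend to be those with small Z_i.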

8.
Using microdata on 30,000 childbirths in India and dynamic panel data models, we analyse the causal effects of birth spacing on subsequent neonatal mortality and of mortality on subsequent birth intervals, controlling for unobserved heterogeneity. Right censoring is accounted for by jointly estimating a fertility equation, identified using data on sterilization. We find evidence of frailty, fecundity, and causal effects in both directions. Birth intervals explain only a limited share of the correlation between the neonatal mortality of successive children in a family. We predict that for every neonatal death, 0.37 additional children are born, of whom 0.30 survive.

9.
Recurrent events with a dependent terminal event arise frequently in a wide variety of fields. In this paper, we propose a new joint model for such data and model the dependence between the recurrent and terminal events through a shared gamma frailty. Specifically, a Cox-Aalen rate frailty model is specified for the recurrent event and an additive hazards frailty model for the terminal event. An estimating equation approach is developed for the parameters in the joint model, and the asymptotic properties of the proposed estimators are established. Simulation studies demonstrate that the proposed estimators perform well in finite samples. The methods are illustrated with an application to a medical cost study of chronic heart failure patients.

10.
Survival analysis is one of the statistical methods deployed in the medical sciences to investigate time-to-event data. This study, which compares the efficiency of several parametric and semiparametric survival models, investigates the effect of demographic and socio-economic factors on growth failure in children below 2 years of age in Iran. Parametric survival models, including the exponential, Weibull, log-logistic and log-normal models, were compared with proportional hazards and extended Cox models using the Akaike Information Criterion and the variability of the estimated parameters. Based on the results, the log-normal model is recommended for analyzing growth failure data for children in Iran. Furthermore, it is suggested that female children, children born to illiterate mothers and children born into larger households receive more attention in terms of growth failure.
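A sketch of the AIC comparison described above, for Weibull versus lognormal fits under right censoring (all data simulated; with censored survival data the likelihood must mix density and survival terms, so scipy's built-in distribution fitting does not apply directly):

```python
import numpy as np
from scipy.optimize import minimize
from scipy import stats

def censored_aic(logpdf, logsf, x0, t, delta):
    """AIC = 2k + 2 * negative log-likelihood for a right-censored fit:
    events contribute log f(t), censored observations log S(t)."""
    nll = lambda p: -np.sum(delta * logpdf(p, t) + (1 - delta) * logsf(p, t))
    fit = minimize(nll, x0, method="Nelder-Mead")
    return 2 * len(x0) + 2 * fit.fun

# Parameters on the log scale to keep shape/scale positive.
weib_logpdf = lambda p, t: stats.weibull_min.logpdf(t, np.exp(p[0]), scale=np.exp(p[1]))
weib_logsf  = lambda p, t: stats.weibull_min.logsf(t, np.exp(p[0]), scale=np.exp(p[1]))
lnorm_logpdf = lambda p, t: stats.lognorm.logpdf(t, np.exp(p[0]), scale=np.exp(p[1]))
lnorm_logsf  = lambda p, t: stats.lognorm.logsf(t, np.exp(p[0]), scale=np.exp(p[1]))

rng = np.random.default_rng(0)
t_full = rng.lognormal(1.0, 0.6, size=300)     # simulated times to growth failure
c = rng.uniform(1, 12, size=300)               # censoring, e.g. end of follow-up
t, delta = np.minimum(t_full, c), (t_full <= c).astype(int)

print("Weibull AIC:  ", censored_aic(weib_logpdf, weib_logsf, [0.0, 0.0], t, delta))
print("Lognormal AIC:", censored_aic(lnorm_logpdf, lnorm_logsf, [0.0, 0.0], t, delta))
```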

11.
The use of joint modelling approaches is becoming increasingly popular when an association exists between survival and longitudinal processes. Widely recognized for their gain in efficiency, joint models also offer a reduction in bias compared with naïve methods. With this increasing popularity comes a constantly expanding literature on joint modelling approaches. The aim of this paper is to give an overview of recent literature on joint models, in particular those that focus on the time-to-event survival process. A discussion is provided of the range of survival submodels that have been implemented in a joint modelling framework, with particular focus on recent advances in the software used to build these models. The use of the JM and joineR packages within R is demonstrated through two real-life data examples on the survival of end-stage renal disease patients. Possible future directions for this field of research are also discussed.

12.
Multivariate frailty approaches are most commonly used to define distributions of random vectors representing the lifetimes of individuals or components, and to compare these vectors stochastically in terms of various multivariate orders. In this paper, we study a multivariate shared reversed frailty model and a general multivariate reversed frailty mixture model, and derive sufficient conditions for some of the stochastic orderings to hold among the random vectors. We also consider a particular case of the general multivariate mixture model in which the baseline distribution function is represented in terms of a copula, and study stochastic comparisons (stochastic and lower orthant orders) between the two random vectors.
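A hedged sketch of the shared reversed frailty construction (notation ours): conditionally on a shared frailty Θ, each component follows proportional reversed hazards, so the joint distribution function is

```latex
F(x_1,\dots,x_n)
  = \mathbb{E}_{\Theta}\Big[\prod_{i=1}^{n} F_i(x_i)^{\Theta}\Big]
  = L_{\Theta}\Big(-\sum_{i=1}^{n}\log F_i(x_i)\Big),
```

where L_Θ is the Laplace transform of Θ. Stochastic comparisons between two such vectors can then be driven by orderings of their frailty distributions.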

13.
Data with a multilevel structure occur frequently across a range of disciplines, including epidemiology, health services research, public health, education and sociology. We describe three families of regression models for the analysis of multilevel survival data. First, Cox proportional hazards models with mixed effects incorporate cluster-specific random effects that modify the baseline hazard function. Second, piecewise exponential survival models partition the duration of follow-up into mutually exclusive intervals and fit a model that assumes the hazard function is constant within each interval; this is equivalent to a Poisson regression model that incorporates the duration of exposure within each interval, and by adding cluster-specific random effects, generalised linear mixed models can be used to analyse these data. Third, after partitioning the duration of follow-up into mutually exclusive intervals, one can use discrete-time survival models that employ a complementary log-log generalised linear model for the occurrence of the outcome of interest within each interval, with random effects incorporated to account for within-cluster homogeneity in outcomes. We illustrate the application of these methods using data on patients hospitalised with a heart attack, implemented in three statistical programming languages (R, SAS and Stata).
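A sketch of the second family, the piecewise exponential model fitted as a Poisson regression with a log-exposure offset (in Python rather than the paper's R/SAS/Stata; data simulated, and without the cluster-specific random effects that would turn this into a generalised linear mixed model):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def split_episodes(time, event, cuts):
    """Split each subject's follow-up at the interval cut points,
    returning one row per (subject, interval) with exposure and event flag."""
    rows = []
    edges = np.concatenate([[0.0], cuts, [np.inf]])
    for i, (t, d) in enumerate(zip(time, event)):
        for k in range(len(edges) - 1):
            lo, hi = edges[k], edges[k + 1]
            if t <= lo:
                break
            rows.append({"id": i, "interval": k,
                         "exposure": min(t, hi) - lo,
                         "event": int(d and t <= hi)})
    return pd.DataFrame(rows)

rng = np.random.default_rng(0)
n = 500
time = rng.exponential(3.0, size=n)
event = rng.random(n) < 0.8                    # some right censoring
pp = split_episodes(time, event, cuts=np.array([1.0, 2.0, 4.0]))

# Piecewise-constant hazard = Poisson regression on interval dummies
# with a log-exposure offset.
X = pd.get_dummies(pp["interval"], prefix="int", dtype=float)
model = sm.GLM(pp["event"], X, family=sm.families.Poisson(),
               offset=np.log(pp["exposure"]))
print(model.fit().summary())
```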

14.
The survival pattern of Swedish commercial banks during the period 1830-1990 is studied by parametric and non-parametric event-history methods. In particular, we study the sensitivity of the conclusions reached to the model used. The hazard is found to be inverse U-shaped, which means that models that cannot accommodate this type of hazard run into difficulties. Thus two of the most popular approaches in the analysis of event-history data, the Gompertz and Weibull models, produce misleading results regarding the development of the death risk of banks over time. As regards the effect of explanatory variables on survival, on the other hand, most models are found to be robust: even with misspecified baseline hazards, the estimated effects of the explanatory variables do not seem to be seriously wrong.
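To see why the Gompertz and Weibull models struggle here: their hazards are monotone, whereas a log-logistic hazard can rise and then fall. A small numeric check (parameter values purely illustrative):

```python
import numpy as np
from scipy import stats

t = np.linspace(0.05, 10, 400)

def hazard(dist):
    """Hazard h(t) = f(t) / S(t) for a frozen scipy distribution."""
    return np.exp(dist.logpdf(t) - dist.logsf(t))

# Weibull hazards are monotone in t for any shape parameter ...
h_weib = hazard(stats.weibull_min(1.5, scale=3.0))
# ... while the log-logistic (Fisk) hazard with shape > 1 rises then falls,
# matching the inverse-U-shaped death risk found in the paper.
h_fisk = hazard(stats.fisk(2.0, scale=3.0))

print("Weibull hazard monotone:", bool(np.all(np.diff(h_weib) > 0)))
print("log-logistic hazard peaks at t ~", t[np.argmax(h_fisk)])
```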

15.
We propose a novel time series panel data framework for estimating and forecasting time-varying corporate default rates subject to observed and unobserved risk factors. In an empirical application for a U.S. dataset, we find a large and significant role for a dynamic frailty component even after controlling for more than 80% of the variation in more than 100 macro-financial covariates and other standard risk factors. We emphasize the need for a latent component to prevent a downward bias in estimated default rate volatility and in estimated probabilities of extreme default losses on portfolios of U.S. debt. The latent factor does not substitute for a single omitted macroeconomic variable. We argue that it captures different omitted effects at different times. We also provide empirical evidence that default and business cycle conditions partly depend on different processes. In an out-of-sample forecasting study for point-in-time default probabilities, we obtain mean absolute error reductions of more than forty percent when compared to models with observed risk factors only. The forecasts are relatively more accurate when default conditions diverge from aggregate macroeconomic conditions.

16.
We present discrete time survival models of borrower default for credit cards that include behavioural data about credit card holders and macroeconomic conditions across the credit card lifetime. We find that dynamic models which include these behavioural and macroeconomic variables provide statistically significant improvements in model fit, which translate into better forecasts of default at both account and portfolio levels when applied to an out-of-sample data set. By simulating extreme economic conditions, we show how these models can be used to stress test credit card portfolios.
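A compact sketch of the discrete-time setup: each account contributes one observation per month until default or censoring, and the default indicator is regressed on behavioural and macroeconomic covariates. The variable names, coefficients and data below are invented for illustration, not taken from the paper:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_accounts, n_months = 2000, 24
util = rng.uniform(0, 1, size=(n_accounts, n_months))   # behavioural covariate
unemp = 0.05 + 0.02 * np.sin(np.arange(n_months) / 4)   # common macro series

rows, y = [], []
for i in range(n_accounts):
    for m in range(n_months):
        # True monthly default hazard (logistic), for simulation only:
        p = 1 / (1 + np.exp(-(-6.0 + 2.0 * util[i, m] + 30.0 * unemp[m])))
        d = rng.random() < p
        rows.append([1.0, util[i, m], unemp[m]])
        y.append(int(d))
        if d:            # account leaves the risk set after default
            break

X, y = np.asarray(rows), np.asarray(y)
fit = sm.Logit(y, X).fit(disp=0)
print(fit.params)        # intercept, behavioural and macro coefficients
```

Stress testing then amounts to feeding an adverse `unemp` path through the fitted hazard and aggregating predicted defaults to the portfolio level.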

17.
We introduce longitudinal factor analysis (LFA) to extract the common risk-free (CRF) rate from a sample of sovereign bonds of countries in a monetary union. Since LFA exploits the typically very large longitudinal dimension of bond data, it performs better than traditional factor analysis methods that rely on the much smaller cross-sectional dimension. European sovereign bond yields for the period 2006-2011 are decomposed into a CRF rate, a default risk premium and a liquidity risk premium. Our empirical findings suggest that investors chase both credit quality and liquidity, and that they price double default risk on credit default swaps.

18.
Jing Pan, Yuan Yu and Yong Zhou, Metrika (2018) 81(7): 821-847
With the explosion of digital information, high-dimensional data are frequently collected in many domains, in which the dimension of the covariates can be much larger than the sample size. Many effective methods have been developed recently to reduce the dimension of such data; few, however, perform well for survival data with censoring. In this article, we develop a novel nonparametric feature screening procedure for ultrahigh-dimensional survival data that incorporates an inverse probability weighting scheme to handle censoring. The proposed method is model-free and hence can be applied to a wide range of survival models. Moreover, it is robust to heterogeneity and invariant to monotone increasing transformations of the response. The sure screening property and ranking consistency property are established under mild conditions. The competence and robustness of our method are further confirmed through comprehensive simulation studies and the analysis of a real data example.
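Inverse probability weighting handles censoring by weighting each observed event by the inverse of the estimated censoring survival function. A sketch with a hand-rolled Kaplan-Meier estimate for the censoring distribution and a simplified weighted-correlation screening utility (the paper's actual screening statistic differs):

```python
import numpy as np

def km_censoring_survival(t, delta):
    """Kaplan-Meier estimate of the censoring survival G(t) = P(C > t),
    treating censorings (delta == 0) as the 'events'. Assumes no tied
    times, which is adequate for a sketch."""
    order = np.argsort(t)
    t_s, d_s = t[order], delta[order]
    at_risk = len(t) - np.arange(len(t))
    factors = np.where(d_s == 0, 1.0 - 1.0 / at_risk, 1.0)
    return t_s, np.cumprod(factors)

def ipw_weights(t, delta):
    """Inverse-probability-of-censoring weights w_i = delta_i / G_hat(T_i-)."""
    t_s, G = km_censoring_survival(t, delta)
    idx = np.searchsorted(t_s, t, side="left")   # count of times strictly < t_i
    G_minus = np.concatenate([[1.0], G])[idx]    # G just before each t_i
    return delta / np.clip(G_minus, 1e-12, None)

def screen(X, t, delta, top_k):
    """Rank covariates by an IPW-weighted absolute correlation with T --
    a simplified screening utility, not the paper's exact statistic."""
    w = ipw_weights(t, delta)
    w = w / w.sum()
    tc = t - np.sum(w * t)
    score = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        xc = X[:, j] - np.sum(w * X[:, j])
        score[j] = abs(np.sum(w * xc * tc)) / np.sqrt(
            np.sum(w * xc ** 2) * np.sum(w * tc ** 2))
    return np.argsort(score)[::-1][:top_k]

rng = np.random.default_rng(0)
n, p = 200, 1000                     # ultrahigh dimension: p >> n
X = rng.normal(size=(n, p))
t_full = np.exp(0.8 * X[:, 0] - 0.6 * X[:, 1] + rng.normal(size=n))
c = rng.exponential(5.0, size=n)
t, delta = np.minimum(t_full, c), (t_full <= c).astype(int)
print(screen(X, t, delta, top_k=5))  # columns 0 and 1 should rank highly
```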

19.
Despite their long history, parametric survival-time models have largely been neglected in the modern biostatistical and medical literature in favour of the Cox proportional hazards model. Here, I present a case for the use of the lognormal distribution in the analysis of survival times of breast and ovarian cancer patients, specifically in modelling the effects of prognostic factors. The lognormal provides a completely specified probability distribution for the observations and a sensible estimate of the variation explained by the model, a quantity that is controversial for the Cox model. I show how imputation of censored observations under the model may be used to inspect the data using familiar graphical and other techniques. Results from the Cox and lognormal models are compared and appear to differ to some extent; however, it is hard to judge which model gives the more accurate estimates. It is concluded that, provided the lognormal model fits the data adequately, it may be a useful approach to the analysis of censored survival data.
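A sketch of the imputation step: under a fitted lognormal model, a right-censored observation at c is replaced by a draw of log T from the fitted normal truncated to (log c, ∞). Parameters and data below are simulated for illustration:

```python
import numpy as np
from scipy.stats import truncnorm

def impute_censored_lognormal(t, delta, mu, sigma, rng):
    """Impute right-censored times under a fitted lognormal model: for a
    censored observation at c, draw log T from N(mu, sigma^2) truncated
    to (log c, inf). mu may be X @ beta from a fitted regression."""
    t_imp = t.astype(float).copy()
    cens = delta == 0
    a = (np.log(t[cens]) - mu[cens]) / sigma     # lower truncation, z-scale
    z = truncnorm.rvs(a, np.inf, size=cens.sum(), random_state=rng)
    t_imp[cens] = np.exp(mu[cens] + sigma * z)
    return t_imp

rng = np.random.default_rng(0)
n = 200
mu = np.full(n, 1.0)                             # constant mean for illustration
t_full = np.exp(mu + 0.7 * rng.normal(size=n))
c = rng.exponential(5.0, size=n)
t, delta = np.minimum(t_full, c), (t_full <= c).astype(int)
t_complete = impute_censored_lognormal(t, delta, mu, 0.7, rng)
# t_complete can now be inspected with ordinary graphical methods.
```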

20.
This Monte Carlo study examines the relative performance of sample selection and two-part models for data with a cluster at zero. The data are drawn from a bivariate normal distribution with a positive correlation. The alternative estimators are examined in terms of mean squared error, mean bias and pointwise bias. The sample selection estimators include LIML and FIML; the two-part estimators include a naive variant (the true specification, omitting the correlation coefficient) and a data-analytic ('testimator') variant. In the absence of exclusion restrictions, the two-part models are no worse, and often appreciably better, than selection models in terms of mean behavior, but can behave poorly for extreme values of the independent variable. LIML had the worst performance of all four models. Empirically, selection effects are difficult to distinguish from a non-linear (e.g., quadratic) response. With exclusion restrictions, simple selection models were significantly better behaved than the naive two-part model over subranges of the data, but were only negligibly better than the data-analytic version.
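A compact Monte Carlo in the spirit of the study (parameter values ours): without exclusion restrictions, the same covariate enters both equations, so the Heckman-style correction is identified only by the nonlinearity of the inverse Mills ratio:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n, rho = 5000, 0.5
x = rng.normal(size=n)
X = sm.add_constant(x)

# Bivariate-normal errors with a positive correlation, as in the study design.
e = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
select = 0.5 + 1.0 * x + e[:, 0] > 0             # participation equation
y = np.where(select, 1.0 + 1.0 * x + e[:, 1], np.nan)

# Naive two-part model: OLS on the observed (selected) subsample only.
ols = sm.OLS(y[select], X[select]).fit()

# Two-step selection correction: probit, then OLS with inverse Mills ratio.
# Note: no exclusion restriction (same X in both equations).
probit = sm.Probit(select.astype(int), X).fit(disp=0)
xb = X @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)
heck = sm.OLS(y[select], np.column_stack([X, imr])[select]).fit()

print("naive two-part slope:", ols.params[1])    # biased when rho != 0
print("corrected slope:     ", heck.params[1])
```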
