Similar documents (20 results)
1.
Two‐state models (working/failed or alive/dead) are widely used in reliability and survival analysis. In contrast, multi‐state stochastic processes provide a richer framework for modeling and analyzing the progression of a process from an initial to a terminal state, allowing incorporation of more details of the process mechanism. We review multi‐state models, focusing on time‐homogeneous semi‐Markov processes (SMPs), and then describe the statistical flowgraph framework, which comprises analysis methods and algorithms for computing quantities of interest such as the distribution of first passage times to a terminal state. These algorithms algebraically combine integral transforms of the waiting time distributions in each state and invert them to get the required results. The estimated transforms may be based on parametric distributions or on empirical distributions of sample transition data, which may be censored. The methods are illustrated with several applications.
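The flowgraph idea of combining branch transforms can be checked on a toy example: for a series path 0 → 1 → 2 with exponential waiting times, the first-passage transform is the product of the two branch transforms, so the first-passage time is the convolution of the two waiting times and its mean is the sum of the branch means. A minimal Monte Carlo sketch (plain Python, with illustrative rates; not the transform-inversion algebra itself):

```python
import random

def simulate_first_passage(rates=(1.0, 2.0), n=100_000, seed=42):
    """Toy series flowgraph 0 -> 1 -> 2 with exponential waiting times
    (illustrative rates).  The flowgraph calculus multiplies the two
    branch transforms, so the first-passage time is the convolution of
    the two waiting times and E[T] = 1/rates[0] + 1/rates[1]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += rng.expovariate(rates[0]) + rng.expovariate(rates[1])
    return total / n
```

Here the exact mean is 1/1.0 + 1/2.0 = 1.5; draws from fitted parametric or censored-empirical waiting-time distributions could be substituted for `expovariate` to mimic the framework's semi-parametric side.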

2.
In this paper, we study a Bayesian approach to flexible modeling of conditional distributions. The approach uses a flexible model for the joint distribution of the dependent and independent variables and then extracts the conditional distributions of interest from the estimated joint distribution. We use a finite mixture of multivariate normals (FMMN) to estimate the joint distribution. The conditional distributions can then be assessed analytically or through simulations. The discrete variables are handled through the use of latent variables. The estimation procedure employs an MCMC algorithm. We provide a characterization of the Kullback–Leibler closure of FMMN and show that the joint and conditional predictive densities implied by the FMMN model are consistent estimators for a large class of data generating processes with continuous and discrete observables. The method can be used as a robust regression model with discrete and continuous dependent and independent variables and as a Bayesian alternative to semi- and non-parametric models such as quantile and kernel regression. In experiments, the method compares favorably with classical nonparametric and alternative Bayesian methods.
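The extraction step admits a closed form: given fitted mixture parameters for a joint distribution of (y, x), the conditional mean reweights each component by its marginal density at the conditioning value and averages the component-wise Gaussian conditional means. A minimal bivariate sketch with hard-coded, hypothetical component parameters standing in for the MCMC output:

```python
import math

# a toy two-component bivariate normal mixture over (y, x); in the
# paper's setting these parameters would come from the posterior fit
components = [
    # (weight, mu_y, mu_x, s_yy, s_yx, s_xx)
    (0.5, 0.0, 0.0, 1.0, 0.8, 1.0),
    (0.5, 3.0, 3.0, 1.0, 0.8, 1.0),
]

def conditional_mean(x0):
    """E[y | x = x0]: reweight each component by its marginal density of
    x at x0, then average the component-wise Gaussian conditional means
    mu_y + (s_yx / s_xx) * (x0 - mu_x)."""
    num = den = 0.0
    for w, mu_y, mu_x, s_yy, s_yx, s_xx in components:
        px = math.exp(-0.5 * (x0 - mu_x) ** 2 / s_xx) / math.sqrt(2 * math.pi * s_xx)
        num += w * px * (mu_y + s_yx / s_xx * (x0 - mu_x))
        den += w * px
    return num / den
```

Near either component's center the conditional mean follows that component's regression line; halfway between them it blends the two, which is the source of the model's flexibility relative to a single linear regression.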

3.
This survey reviews the existing literature on the most relevant Bayesian inference methods for univariate and multivariate GARCH models. The advantages and drawbacks of each procedure are outlined, as well as the advantages of the Bayesian approach over classical procedures. The paper places particular emphasis on recent Bayesian non‐parametric approaches for GARCH models that avoid imposing arbitrary parametric distributional assumptions. These novel approaches implicitly assume an infinite mixture of Gaussian distributions for the standardized returns, which has been shown to be more flexible and to describe the uncertainty about future volatilities better. Finally, the survey presents an illustration using real data to show the flexibility and usefulness of the non‐parametric approach.
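For reference, the GARCH(1,1) recursion that these Bayesian treatments build on, simulated here with Gaussian innovations and illustrative parameter values; the sample variance of the simulated returns should approach the unconditional variance ω/(1 − α − β):

```python
import random

def simulate_garch(omega=0.1, alpha=0.1, beta=0.8, T=100_000, seed=1):
    """Simulate a GARCH(1,1): r_t = sigma_t * z_t with z_t ~ N(0, 1) and
    sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2.
    Starts at the unconditional variance omega / (1 - alpha - beta)."""
    rng = random.Random(seed)
    s2 = omega / (1 - alpha - beta)
    returns = []
    for _ in range(T):
        r = (s2 ** 0.5) * rng.gauss(0.0, 1.0)
        returns.append(r)
        s2 = omega + alpha * r * r + beta * s2
    return returns
```

The non-parametric approaches discussed in the survey replace the Gaussian draw for z_t with an infinite mixture, leaving the variance recursion unchanged.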

4.
We propose a non‐parametric test to compare two correlated diagnostic tests for a three‐category classification problem. Our development was motivated by a proteomic study where the objectives are to detect glycan biomarkers for liver cancer and to compare the discrimination ability of various markers. Three distinct disease categories need to be identified from this analysis. We therefore chose to use three‐dimensional receiver operating characteristic (ROC) surfaces and volumes under the ROC surfaces to describe the overall accuracy for different biomarkers. Each marker in this study might include a cluster of similar individual markers and thus was considered as a hierarchically structured sample. Our proposed statistical test incorporated the within‐marker correlation as well as the between‐marker correlation. We derived asymptotic distributions for three‐dimensional ROC surfaces and subsequently implemented bootstrap methods to facilitate the inferences. Simulation and real‐data analysis were included to illustrate our methods. Our distribution‐free test may be simplified for paired and independent two‐sample comparisons as well. Previously, only parametric tests were known for clustered and correlated three‐category ROC analyses.
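The volume under the ROC surface has a simple empirical counterpart: the proportion of cross-category triples that the marker orders correctly (chance level is 1/6 for three categories). A minimal unclustered sketch, ignoring the within-marker correlation structure the paper handles:

```python
def empirical_vus(x, y, z):
    """Empirical volume under the three-class ROC surface: the fraction
    of triples (a, b, c), one marker value per disease category, that
    are correctly ordered as a < b < c."""
    total = correct = 0
    for a in x:
        for b in y:
            for c in z:
                total += 1
                correct += (a < b < c)
    return correct / total
```

A perfectly separating marker gives VUS = 1, a reversed one gives 0, and a useless one hovers near 1/6; the paper's contribution is valid inference on such estimates under clustering and correlation.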

5.
Many new statistical models may enjoy better interpretability and numerical stability than traditional models in survival data analysis. Specifically, the threshold regression (TR) technique based on the inverse Gaussian distribution is a useful alternative to the Cox proportional hazards model to analyse lifetime data. In this article we consider a semi‐parametric modelling approach for TR and contribute implementational and theoretical details for model fitting and statistical inferences. Extensive simulations are carried out to examine the finite sample performance of the parametric and non‐parametric estimates. A real example is analysed to illustrate our methods, along with a careful diagnosis of model assumptions.
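The first-hitting-time idea behind threshold regression can be checked by simulation: a subject's latent health status follows a Wiener process starting at x0 with negative drift, and the lifetime, the first time the process reaches zero, is inverse Gaussian with mean x0/|drift|. A crude Euler sketch with illustrative parameter values (discrete monitoring slightly overestimates the hitting time):

```python
import random

def mean_hitting_time(x0=1.0, drift=-1.0, sigma=1.0,
                      dt=0.01, n_paths=2000, t_max=20.0, seed=7):
    """Average first time a Wiener process started at x0 with negative
    drift crosses zero, estimated by Euler simulation; the exact first
    hitting time is inverse Gaussian with mean x0 / |drift|."""
    rng = random.Random(seed)
    step_sd = sigma * dt ** 0.5
    total = 0.0
    for _ in range(n_paths):
        x, t = x0, 0.0
        while x > 0 and t < t_max:
            x += drift * dt + step_sd * rng.gauss(0.0, 1.0)
            t += dt
        total += t
    return total / n_paths
```

Regression enters by letting x0 and the drift depend on covariates, which is the structure the article's semi-parametric approach exploits.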

6.
In spite of the current availability of numerous methods of cluster analysis, evaluating a clustering configuration is questionable without the definition of a true population structure, representing the ideal partition that clustering methods should try to approximate. A precise statistical notion of cluster, unshared by most of the mainstream methods, is provided by the density‐based approach, which assumes that clusters are associated with some specific features of the probability distribution underlying the data. The non‐parametric formulation of this approach, known as modal clustering, draws a correspondence between the groups and the modes of the density function. An appealing implication is that the ill‐specified task of cluster detection can be regarded as a more circumscribed problem of estimation, and the number of clusters is also conceptually well defined. In this work, modal clustering is critically reviewed from both conceptual and operational standpoints. The main directions of current research are outlined, along with some challenges and avenues for further research.
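The modal correspondence is easy to demonstrate in one dimension: run each observation uphill on a kernel density estimate (mean-shift iterations) and call two points cluster-mates when their ascents converge to the same mode. A small self-contained sketch; the bandwidth and merge tolerance are arbitrary choices, not recommendations:

```python
import math

def mean_shift_modes(data, bandwidth=0.5, iters=300, merge_tol=None):
    """Modal clustering in one dimension: move every point uphill on a
    Gaussian kernel density estimate via mean-shift updates, then group
    points whose iterates land on the same density mode."""
    if merge_tol is None:
        merge_tol = bandwidth / 10
    shifted = list(data)
    for _ in range(iters):
        new = []
        for x in shifted:
            w = [math.exp(-0.5 * ((x - p) / bandwidth) ** 2) for p in data]
            new.append(sum(wi * p for wi, p in zip(w, data)) / sum(w))
        shifted = new
    modes, labels = [], []
    for x in shifted:
        for i, m in enumerate(modes):
            if abs(x - m) < merge_tol:
                labels.append(i)
                break
        else:
            modes.append(x)
            labels.append(len(modes) - 1)
    return modes, labels
```

Note how the number of clusters is an output, determined by the number of detected modes at the chosen bandwidth, rather than an input, which is the conceptual point the review emphasizes.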

7.
This paper is concerned with the statistical inference on seemingly unrelated varying coefficient partially linear models. By combining the local polynomial and profile least squares techniques, and estimating the contemporaneous correlation, we propose a class of weighted profile least squares estimators (WPLSEs) for the parametric components. It is shown that the WPLSEs achieve the semiparametric efficiency bound and are asymptotically normal. For the non‐parametric components, by applying the undersmoothing technique, and taking the contemporaneous correlation into account, we propose an efficient local polynomial estimation. The resulting estimators are shown to have mean‐squared errors smaller than those of estimators that neglect the contemporaneous correlation. In addition, a class of variable selection procedures is developed for simultaneously selecting significant variables and estimating unknown parameters, based on the non‐concave penalized and weighted profile least squares techniques. With a proper choice of regularization parameters and penalty functions, the proposed variable selection procedures perform as efficiently as if one knew the true submodels. The proposed methods are evaluated through extensive simulation studies and applied to a set of real data.

8.
We consider estimation of panel data models with sample selection when the equation of interest contains endogenous explanatory variables as well as unobserved heterogeneity. Assuming that appropriate instruments are available, we propose several tests for selection bias and two estimation procedures that correct for selection in the presence of endogenous regressors. The tests are based on the fixed effects two-stage least squares estimator, thereby permitting arbitrary correlation between unobserved heterogeneity and explanatory variables. The first correction procedure is parametric and is valid under the assumption that the errors in the selection equation are normally distributed. The second procedure estimates the model parameters semiparametrically using series estimators. In the proposed testing and correction procedures, the error terms may be heterogeneously distributed and serially dependent in both selection and primary equations. Because these methods allow for a rather flexible structure of the error variance and do not impose any nonstandard assumptions on the conditional distributions of explanatory variables, they provide a useful alternative to the existing approaches presented in the literature.

9.
We propose a measure of predictability based on the ratio of the expected loss of a short‐run forecast to the expected loss of a long‐run forecast. This predictability measure can be tailored to the forecast horizons of interest, and it allows for general loss functions, univariate or multivariate information sets, and covariance stationary or difference stationary processes. We propose a simple estimator, and we suggest resampling methods for inference. We then provide several macroeconomic applications. First, we illustrate the implementation of predictability measures based on fitted parametric models for several US macroeconomic time series. Second, we analyze the internal propagation mechanism of a standard dynamic macroeconomic model by comparing the predictability of model inputs and model outputs. Third, we use predictability as a metric for assessing the similarity of data simulated from the model and actual data. Finally, we outline several non‐parametric extensions of our approach. Copyright © 2001 John Wiley & Sons, Ltd.
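For a stationary AR(1) under squared-error loss the measure has a closed form, since the h-step forecast MSE is σ²(1 − φ^{2h})/(1 − φ²) and σ² cancels from the ratio. A minimal sketch of this special case (not the paper's general estimator):

```python
def predictability(phi, j, k):
    """1 - MSE(j-step) / MSE(k-step) for an AR(1) with coefficient phi
    and horizons j < k: zero for white noise, and approaching
    phi**(2*j) as the long horizon k grows."""
    mse = lambda h: (1 - phi ** (2 * h)) / (1 - phi ** 2)
    return 1 - mse(j) / mse(k)
```

For example, with phi = 0.9, j = 1, and a long horizon k = 100 the measure is essentially phi² = 0.81: most of the one-step loss reduction comes from the strong persistence.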

10.
Panel count data arise in many applications when the event history of a recurrent event process is only examined at a sequence of discrete time points. In spite of the recent methodological developments, the availability of their software implementations has been rather limited. Focusing on a practical setting where the effects of some time‐independent covariates on the recurrent events are of primary interest, we review semiparametric regression modelling approaches for panel count data that have been implemented in the R package spef. The methods are grouped into two categories depending on whether the examination times are associated with the recurrent event process after conditioning on covariates. The reviewed methods are illustrated with a subset of the data from a skin cancer clinical trial.

11.
The present paper introduces a methodology for the semiparametric or non‐parametric two‐sample equivalence problem when the effects are specified by statistical functionals. The mean relative risk functional of two populations is given by the average of the time‐dependent risk. This functional is a meaningful non‐parametric quantity, which is invariant under strictly monotone transformations of the data. In the case of proportional hazard models, the functional determines just the proportional hazard risk factor. It is shown that an equivalence test of the type of the two‐sample Savage rank test is appropriate for this functional. Under proportional hazards, this test can be carried out as an exact level α test. It also works quite well under other semiparametric models. Similar results are presented for a Wilcoxon rank‐sum test for equivalence based on the Mann–Whitney functional given by the relative treatment effect.
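The Mann–Whitney functional underlying the Wilcoxon equivalence test is the relative treatment effect p = P(X < Y) + ½·P(X = Y), estimated by counting pairs across the two samples. A minimal sketch of the point estimate (the equivalence test itself adds margins and an inversion step):

```python
def relative_effect(x, y):
    """Mann-Whitney functional p = P(X < Y) + 0.5 * P(X = Y): the
    probability that a random observation from the second sample
    exceeds one from the first, counting ties as one half."""
    wins = sum(1.0 if a < b else 0.5 if a == b else 0.0
               for a in x for b in y)
    return wins / (len(x) * len(y))
```

Like the functionals in the paper, p is invariant under strictly monotone transformations of the data, since it depends only on pairwise orderings.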

12.
Under a quantile restriction, randomly censored regression models can be written in terms of conditional moment inequalities. We study the identified features of these moment inequalities with respect to the regression parameters where we allow for covariate dependent censoring, endogenous censoring and endogenous regressors. These inequalities restrict the parameters to a set. We show regular point identification can be achieved under a set of interpretable sufficient conditions. We then provide a simple way to convert conditional moment inequalities into unconditional ones while preserving the informational content. Our method obviates the need for nonparametric estimation, which would require the selection of smoothing parameters and trimming procedures. Without the point identification conditions, our objective function can be used to do inference on the partially identified parameter. Maintaining the point identification conditions, we propose a quantile minimum distance estimator which converges at the parametric rate to the parameter vector of interest, and has an asymptotically normal distribution. A small scale simulation study and an application using drug relapse data demonstrate satisfactory finite sample performance.

13.
The purpose of the paper is to compare results of estimation and inference concerning covariate effects as obtained from two approaches to the analysis of survival data with multiple causes of failure. The first approach involves a dynamic model for the cause-specific hazard rate. The second is based on a static logistic regression model for the conditional probability of having had an event of interest. The influence of sociodemographic characteristics on the rate of family initiation and, more importantly, on the choice between marriage and cohabitation as a first union, is examined. We found that results, generally, are similar across the methods considered. Some issues in relation to censoring mechanisms and independence among causes of failure are discussed.

14.
This paper considers the location‐scale quantile autoregression in which the location and scale parameters are subject to regime shifts. The regime changes in lower and upper tails are determined by the outcome of a latent, discrete‐state Markov process. The new method provides direct inference and estimation for different parts of a non‐stationary time series distribution. Bayesian inference for switching regimes within a quantile, via a three‐parameter asymmetric Laplace distribution, is adapted and designed for parameter estimation. Using the Bayesian output, the marginal likelihood is readily available for testing the presence and the number of regimes. The simulation study shows that the predictability of regimes and conditional quantiles obtained by using the asymmetric Laplace distribution as the likelihood is fairly comparable with that under the true model distributions. However, ignoring that autoregressive coefficients might be quantile dependent leads to substantial bias in both regime inference and quantile prediction. The potential of this new approach is illustrated in the empirical applications to US inflation and real exchange rates for asymmetric dynamics and the S&P 500 index returns of different frequencies for financial market risk assessment.
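The asymmetric Laplace working likelihood is tied to quantile regression through the check loss: maximizing the ALD likelihood in a location parameter is equivalent to minimizing Σ ρ_τ(y − m), whose minimizer is the τ-quantile. A minimal grid-search sketch of that connection (no regime switching, no autoregression):

```python
def check_loss(u, tau):
    """Quantile check loss rho_tau(u) = u * (tau - 1{u < 0}); the
    asymmetric Laplace log-density is -rho_tau(u) / sigma up to terms
    not involving the location, so ALD MLE = check-loss minimization."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def grid_quantile(y, tau, grid):
    """Location value on `grid` minimizing the summed check loss;
    approximates the tau-quantile of y."""
    return min(grid, key=lambda m: sum(check_loss(v - m, tau) for v in y))
```

In the paper's model the location and scale of this working likelihood switch with the latent Markov state, one such likelihood per quantile of interest.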

15.
Classification is a multivariate technique that is concerned with allocating new observations to two or more groups. We use interpoint distances to measure the closeness of the samples and construct new rules for high dimensional classification of discrete observations. Applicable to high dimensional data, the new method is non‐parametric and uses test‐based classification with permutation testing. We propose a modification of a test‐based rule to use relative values with respect to the training samples baseline. We compare the proposed rule with parametric methods, such as the likelihood ratio rule and the modified linear discriminant function, and with non‐parametric techniques such as the support vector machine, nearest neighbour and depth‐based classification, under multivariate Bernoulli, multinomial and multivariate Poisson distributions.
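The flavor of interpoint-distance classification can be conveyed by the simplest such rule: assign a new observation to the group with the smallest average distance between it and that group's training points. This toy sketch omits the test-based and permutation machinery the paper actually proposes:

```python
def classify(x, groups):
    """Assign x to the training group minimizing the average Euclidean
    interpoint distance between x and the group's members.  `groups`
    maps a label to a list of training vectors."""
    def avg_dist(sample):
        return sum(sum((a - b) ** 2 for a, b in zip(x, s)) ** 0.5
                   for s in sample) / len(sample)
    return min(groups, key=lambda g: avg_dist(groups[g]))
```

Because the rule only consumes pairwise distances, it extends directly to high-dimensional or discrete data where parametric density estimates are unreliable, which is the setting the paper targets.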

16.
In their advocacy of the rank‐transformation (RT) technique for analysis of data from factorial designs, Mendeş and Yiğit (Statistica Neerlandica, 67, 2013, 1–26) missed important analytical studies identifying the statistical shortcomings of the RT technique, the recommendation that the RT technique not be used, and important advances that have been made for properly analyzing data in a non‐parametric setting. Applied data analysts are at risk of being misled by Mendeş and Yiğit, when statistically sound techniques are available for the proper non‐parametric analysis of data from factorial designs. The appropriate methods express hypotheses in terms of normalized distribution functions, and the test statistics account for variance heterogeneity.

17.
We consider a utility‐consistent static labor supply model with flexible preferences and a nonlinear and possibly non‐convex budget set. Stochastic error terms are introduced to represent optimization and reporting errors, stochastic preferences, and heterogeneity in wages. Coherency conditions on the parameters and the support of error distributions are imposed for all observations. The complexity of the model makes it impossible to write down the probability of participation. Hence we use simulation techniques in the estimation. We compare our approach with various simpler alternatives proposed in the literature. Both in Monte Carlo experiments and for real data the various estimation methods yield very different results. Copyright © 2008 John Wiley & Sons, Ltd.

18.
Pooling of data is often carried out to protect privacy or to save cost, with the claimed advantage that it does not lead to much loss of efficiency. We argue that this does not give the complete picture as the estimation of different parameters is affected to different degrees by pooling. We establish a ladder of efficiency loss for estimating the mean, variance, skewness and kurtosis, and more generally multivariate joint cumulants, in powers of the pool size. The asymptotic efficiency of the pooled data non‐parametric/parametric maximum likelihood estimator relative to the corresponding unpooled data estimator is reduced by a factor equal to the pool size whenever the order of the cumulant to be estimated is increased by one. The implications of this result are demonstrated in case–control genetic association studies with interactions between genes. Our findings provide a guideline for the discriminating use of data pooling in practice and the assessment of its relative efficiency. As exact maximum likelihood estimates are difficult to obtain if the pool size is large, we address briefly how to obtain computationally efficient estimates from pooled data and suggest Gaussian estimation and non‐parametric maximum likelihood as two feasible methods.

19.
We consider efficient estimation in moment conditions models with non‐monotonically missing‐at‐random (MAR) variables. A version of MAR point‐identifies the parameters of interest and gives a closed‐form efficient influence function that can be used directly to obtain efficient semi‐parametric generalized method of moments (GMM) estimators under standard regularity conditions. A small‐scale Monte Carlo experiment with MAR instrumental variables demonstrates that the asymptotic superiority of these estimators over the standard methods carries over to finite samples. An illustrative empirical study of the relationship between a child's years of schooling and number of siblings indicates that these GMM estimators can generate results with substantive differences from standard methods. Copyright © 2015 John Wiley & Sons, Ltd.

20.
We consider estimating binary response models on an unbalanced panel, where the outcome of the dependent variable may be missing due to nonrandom selection, or there is self‐selection into a treatment. In the present paper, we first consider estimation of sample selection models and treatment effects using a fully parametric approach, where the error distribution is assumed to be normal in both primary and selection equations. Arbitrary time dependence in errors is permitted. Estimation of both coefficients and partial effects, as well as tests for selection bias, are discussed. Furthermore, we consider a semiparametric estimator of binary response panel data models with sample selection that is robust to a variety of error distributions. The estimator employs a control function approach to account for endogenous selection and permits consistent estimation of scaled coefficients and relative effects.
