Similar Literature
20 similar articles found.
1.
Multiple event data are frequently encountered in medical follow‐up, engineering and other applications when multiple events are considered as the major outcomes. They may be repetitions of the same event (recurrent events) or may be events of different nature. Times between successive events (gap times) are often of direct interest in these applications. The stochastic‐ordering structure and within‐subject dependence of multiple events generate statistical challenges for analysing such data, including induced dependent censoring and non‐identifiability of marginal distributions. This paper provides an overview of a class of existing non‐parametric estimation methods for gap time distributions for various types of multiple event data, where sampling bias from induced dependent censoring is effectively adjusted. We discuss the statistical issues in gap time analysis, describe the estimation procedures and illustrate the methods with a comparative simulation study and a real application to an AIDS clinical trial. A comprehensive understanding of the challenges and available methods for non‐parametric analysis is useful because there is no standard approach to identifying an appropriate gap time method for the research question of interest. The methods discussed in this review allow practitioners to handle a variety of real‐world multiple event data effectively.
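
To make the induced-dependent-censoring problem concrete, here is a minimal simulation sketch (illustrative only, not one of the estimators reviewed above): gap times share a gamma frailty, so the censoring time left for the second gap, C − T1, depends on T1; the naive complete-case survivor estimate of the second gap time is biased, while a simple inverse-probability-of-censoring-weighted (IPCW) estimate adjusts for the sampling bias. The frailty mechanism and uniform censoring are assumptions of the sketch.

```python
# Simulated gap-time data with induced dependent censoring, and an IPCW fix.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
frailty = rng.gamma(2.0, 0.5, n)            # shared frailty -> within-subject dependence
t1 = rng.exponential(1.0 / frailty)         # first gap time
t2 = rng.exponential(1.0 / frailty)         # second gap time, correlated with t1
c = rng.uniform(0.5, 6.0, n)                # independent total-follow-up censoring

def kaplan_meier(time, event):
    """Survivor estimate on the sorted observation grid."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    at_risk = np.arange(len(time), 0, -1)
    return time, np.cumprod(1.0 - event / at_risk)

# Censoring survivor G(t) = P(C >= t), estimated by KM with the event role reversed.
gc_time, gc_surv = kaplan_meier(np.minimum(t1 + t2, c), (c < t1 + t2).astype(float))

def g_minus(t):
    """Left-continuous lookup of the KM censoring survivor."""
    idx = np.searchsorted(gc_time, t, side="left") - 1
    return np.where(idx < 0, 1.0, gc_surv[np.clip(idx, 0, len(gc_time) - 1)])

obs = t1 + t2 <= c                          # fully observed pairs
w = 1.0 / np.maximum(g_minus((t1 + t2)[obs]), 1e-10)   # IPCW weights
grid = np.linspace(0.0, 2.0, 21)
s_naive = np.array([(t2[obs] > s).mean() for s in grid])        # complete-case: biased
s_ipcw = np.array([(w * (t2[obs] > s)).sum() for s in grid]) / w.sum()
s_true = np.array([(t2 > s).mean() for s in grid])
print(np.abs(s_naive - s_true).max(), np.abs(s_ipcw - s_true).max())
```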

2.
Many new statistical models may enjoy better interpretability and numerical stability than traditional models in survival data analysis. Specifically, the threshold regression (TR) technique based on the inverse Gaussian distribution is a useful alternative to the Cox proportional hazards model to analyse lifetime data. In this article we consider a semi‐parametric modelling approach for TR and contribute implementational and theoretical details for model fitting and statistical inferences. Extensive simulations are carried out to examine the finite sample performance of the parametric and non‐parametric estimates. A real example is analysed to illustrate our methods, along with a careful diagnosis of model assumptions.
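
As a parametric warm-up to the semi‐parametric TR model above, the sketch below simulates first‐hitting times of a Wiener process (inverse Gaussian with mean y0/μ and shape y0², where y0 is the initial level and −μ the drift) and recovers regression coefficients by maximum likelihood. The log links, the single covariate and the absence of censoring are simplifying assumptions for illustration, not the paper's model.

```python
# Parametric threshold regression via inverse Gaussian first-hitting times.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
y0 = np.exp(0.5 + 0.3 * x)           # initial process level (log link, assumed)
mu = np.exp(-0.2 + 0.4 * x)          # drift toward the zero boundary (log link, assumed)
m, lam = y0 / mu, y0 ** 2            # first-hitting time ~ IG(mean m, shape lam)
t = rng.wald(m, lam)                 # numpy's Wald sampler is the inverse Gaussian

def negloglik(theta):
    a0, a1, b0, b1 = theta
    y0 = np.exp(a0 + a1 * x)
    mu = np.exp(b0 + b1 * x)
    m, lam = y0 / mu, y0 ** 2
    # inverse Gaussian log-density
    ll = 0.5 * np.log(lam / (2 * np.pi * t ** 3)) - lam * (t - m) ** 2 / (2 * m ** 2 * t)
    return -ll.sum()

fit = minimize(negloglik, np.zeros(4), method="BFGS")
print(fit.x)                          # should approach (0.5, 0.3, -0.2, 0.4)
```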

3.
In this paper we discuss the analysis of data from population‐based case‐control studies when there is appreciable non‐response. We develop a class of estimating equations that are relatively easy to implement. For some important special cases, we also provide efficient semi‐parametric maximum‐likelihood methods. We compare the methods in a simulation study based on data from the Women's Cardiovascular Health Study discussed in Arbogast et al. (Estimating incidence rates from population‐based case‐control studies in the presence of non‐respondents, Biometrical Journal 44, 227–239, 2002).
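
One simple member of the weighting family of non‐response corrections (an illustrative construction, not the class of estimating equations developed in the paper) is an inverse‐probability‐weighted logistic score equation with assumed-known response probabilities, solved here by Newton–Raphson:

```python
# IPW logistic estimating equation under outcome-dependent non-response.
import numpy as np

rng = np.random.default_rng(2)
n = 20000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([-1.0, 0.8])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

pi = 1 / (1 + np.exp(-(0.2 + 1.0 * y - 0.8 * x)))   # response prob. depends on y and x
r = rng.binomial(1, pi)                              # response indicator
w = r / pi                                           # Horvitz-Thompson weights

beta = np.zeros(2)
for _ in range(25):                                  # Newton-Raphson on the weighted score
    mu = 1 / (1 + np.exp(-X @ beta))
    score = X.T @ (w * (y - mu))
    hess = (X * (w * mu * (1 - mu))[:, None]).T @ X
    beta += np.linalg.solve(hess, score)
print(beta)   # close to beta_true despite the non-response
```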

4.
We propose an optimal filter to transform the Conference Board Composite Leading Index (CLI) into recession probabilities in the US economy. We also analyse the CLI's accuracy at anticipating US output growth. We compare the predictive performance of linear, VAR extensions of smooth transition regression and switching regimes, probit and non‐parametric models, and conclude that a combination of the switching‐regimes and non‐parametric forecasts is the best strategy for predicting both the NBER business cycle chronology and GDP growth. This confirms the usefulness of the CLI, even in a real‐time analysis. Copyright © 2002 John Wiley & Sons, Ltd.
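
A minimal sketch of the probit leg of such a horse race, on simulated data rather than the actual CLI; the three‐period averaging and the data‐generating process are illustrative assumptions:

```python
# Probit recession probabilities from simulated leading-index growth.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
T = 400
cli_growth = rng.normal(0.2, 1.0, T)                 # stand-in for CLI growth
lead = (cli_growth[:-3] + cli_growth[1:-2] + cli_growth[2:-1]) / 3
y = (lead + rng.normal(0, 0.3, T - 3) < -0.3).astype(int)   # recession indicator

probit = sm.Probit(y, sm.add_constant(lead)).fit(disp=0)
prob = probit.predict(sm.add_constant(lead))         # fitted recession probabilities
print(probit.params)
print(prob[-5:])
```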

5.
A rich theory of production and analysis of productive efficiency has developed since the pioneering work by Tjalling C. Koopmans and Gerard Debreu. Michael J. Farrell published the first empirical study, and it appeared in a statistical journal (Journal of the Royal Statistical Society), even though the article provided no statistical theory. The literature in econometrics, management sciences, operations research and mathematical statistics has since been enriched by hundreds of papers trying to develop or implement new tools for analysing productivity and efficiency of firms. Both parametric and non‐parametric approaches have been proposed. The mathematical challenge is to derive estimators of production, cost, revenue or profit frontiers, which represent, in the case of production frontiers, the optimal loci of combinations of inputs (like labour, energy and capital) and outputs (the products or services produced by the firms). Optimality is defined in terms of various economic considerations. Then the efficiency of a particular unit is measured by its distance to the estimated frontier. The statistical problem can be viewed as the problem of estimating the support of a multivariate random variable, subject to some shape constraints, in multiple dimensions. These techniques are applied in thousands of papers in the economic and business literature. This ‘guided tour’ reviews the development of various non‐parametric approaches since the early work of Farrell. Remaining challenges and open issues in this active research area are also described. © 2014 The Authors. International Statistical Review © 2014 International Statistical Institute
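
For readers new to these estimators, the free disposal hull (FDH) gives perhaps the simplest non‐parametric Farrell input efficiency score: a unit's efficiency is the smallest input, among units producing at least as much output, relative to its own input. A toy single‐input, single‐output version on simulated data:

```python
# FDH Farrell input efficiency; the true frontier is y = x**0.7.
import numpy as np

rng = np.random.default_rng(4)
n = 200
x = rng.uniform(1, 10, n)                              # input (e.g. labour)
y = x ** 0.7 * np.exp(-rng.exponential(0.3, n))        # output below the frontier

def fdh_input_efficiency(x, y):
    """theta_i = min over units j dominating i in output (y_j >= y_i) of x_j / x_i."""
    theta = np.empty_like(x)
    for i in range(len(x)):
        theta[i] = np.min(x[y >= y[i]] / x[i])         # includes unit i, so theta <= 1
    return theta

theta = fdh_input_efficiency(x, y)
print(theta.mean(), (theta == 1.0).sum())              # mean score; FDH-efficient units
```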

6.
This paper uses non‐parametric kernel methods to construct observation‐specific elasticities of substitution for a balanced panel of 73 developed and developing countries to examine the capital–skill complementarity hypothesis. The exercise shows some support for capital–skill complementarity, but the strength of the evidence depends upon the definition of skilled labour and the elasticity of substitution measure being used. The added flexibility of the non‐parametric procedure also reveals that the elasticities of substitution vary across countries, groups of countries and time periods.
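
A stripped-down illustration of how kernel methods yield observation‐specific quantities: fitting log output on log input by local‐linear regression makes the local slope an observation‐specific elasticity. This is a one‐input caricature of the paper's elasticity‐of‐substitution measures; the bandwidth and data are ad hoc.

```python
# Observation-specific elasticities via local-linear slopes.
import numpy as np

rng = np.random.default_rng(5)
n = 500
lx = rng.uniform(0, 2, n)                                   # log input
ly = 0.3 * lx + 0.2 * lx ** 2 + rng.normal(0, 0.05, n)      # true slope 0.3 + 0.4*lx

def local_linear_slope(x0, x, y, h):
    """Weighted least squares of y on (1, x - x0); the local slope is beta[1]."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)                  # Gaussian kernel weights
    Z = np.column_stack([np.ones_like(x), x - x0])
    beta = np.linalg.solve((Z * w[:, None]).T @ Z, (Z * w[:, None]).T @ y)
    return beta[1]

h = 0.25                                                    # bandwidth (ad hoc)
elasticity = np.array([local_linear_slope(x0, lx, ly, h) for x0 in lx])
print(elasticity[:5], (0.3 + 0.4 * lx)[:5])                 # estimate vs truth, pointwise
```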

7.
The human capital of a firm, as manifested by employee knowledge and experience, represents a key resource underlying a firm's capabilities. Prior empirical studies have found that firms with high levels of human capital experience superior firm performance. Human capital theory proposes that an individual's general or firm‐specific human capital is positively related to compensation. However, empirical studies examining the association between firm‐specific human capital and higher employee compensation have been inconclusive. The current study proposes that firm‐specific human capital be categorized as task‐specific and non‐task‐specific. Employees accumulate task‐specific human capital through duties conducted in their current position. Non‐task‐specific human capital represents experience gained in positions held prior to an employee's current job within the firm. Utilizing human capital data from 38,390 employees representing 76 firms in the IT sector, this study examines the association between forms of human capital and employee compensation at different levels of firm productivity. Results show that task‐specific human capital is associated with higher employee compensation. In addition, firm productivity moderates this association.
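
The moderation claim amounts to an interaction term in a compensation regression. The sketch below uses invented variable names (tenure_in_role as a proxy for task‐specific capital) and simulated data; it shows the specification, not the study's data or results.

```python
# Moderated regression: pay on task-specific capital x firm productivity.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 5000
tenure_in_role = rng.exponential(3, n)           # proxy for task-specific capital
firm_productivity = rng.normal(0, 1, n)
log_pay = (10 + 0.05 * tenure_in_role + 0.10 * firm_productivity
           + 0.03 * tenure_in_role * firm_productivity + rng.normal(0, 0.2, n))

X = sm.add_constant(np.column_stack([tenure_in_role, firm_productivity,
                                     tenure_in_role * firm_productivity]))
print(sm.OLS(log_pay, X).fit().params)           # interaction term captures moderation
```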

8.
This paper is concerned with statistical inference for seemingly unrelated varying coefficient partially linear models. By combining local polynomial and profile least squares techniques, and estimating the contemporaneous correlation, we propose a class of weighted profile least squares estimators (WPLSEs) for the parametric components. It is shown that the WPLSEs achieve the semiparametric efficiency bound and are asymptotically normal. For the non‐parametric components, by applying an undersmoothing technique and taking the contemporaneous correlation into account, we propose an efficient local polynomial estimation. The resulting estimators are shown to have smaller mean‐squared errors than estimators that neglect the contemporaneous correlation. In addition, a class of variable selection procedures is developed for simultaneously selecting significant variables and estimating unknown parameters, based on non‐concave penalized and weighted profile least squares techniques. With a proper choice of regularization parameters and penalty functions, the proposed variable selection procedures perform as efficiently as if one knew the true submodels. The proposed methods are evaluated in extensive simulation studies and applied to a set of real data.

9.
We study parametric and non‐parametric approaches for assessing the accuracy and coverage of a population census based on dual system surveys. The two parametric approaches being considered are post‐stratification and logistic regression, which have been or will be implemented for the US Census dual system surveys. We show that the parametric model‐based approaches are generally biased unless the model is correctly specified. We then study a local post‐stratification approach based on a non‐parametric kernel estimate of the Census enumeration functions. We illustrate that the non‐parametric approach avoids the risk of model mis‐specification and is consistent under relatively weak conditions. The performances of these estimators are evaluated numerically via simulation studies and an empirical analysis based on the 2000 US Census post‐enumeration survey data.
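
The arithmetic underlying dual system estimation is the stratified Lincoln–Petersen estimator: within each post‐stratum, with n1 people counted by the census, n2 by the survey and m by both, the population is estimated by n1·n2/m. The counts below are invented for illustration.

```python
# Post-stratified dual-system (capture-recapture) estimate.
import numpy as np

# per post-stratum: (counted in census n1, counted in survey n2, counted in both m)
strata = np.array([
    [9000, 8500, 8100],
    [4000, 4200, 3500],
    [1500, 1800, 1200],
])
n1, n2, m = strata.T
n_hat = n1 * n2 / m                 # Lincoln-Petersen estimate per stratum
print(n_hat, n_hat.sum())           # stratum totals and post-stratified population total
print(n1 / n_hat)                   # implied census coverage rate per stratum
```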

10.
For a vast class of discrete model families where the natural parameter is constrained to an interval, we give conditions under which the Bayes estimator with respect to a boundary‐supported prior is minimax under squared error loss type functions. Building on a general development of Éric Marchand and Ahmad Parsian applicable to squared error loss, we obtain extensions to various parametric functions and squared error loss type functions. We provide illustrations for various distributions and parametric functions; these include examples for many common discrete distributions, as well as cases where the parametric function is a zero-count probability, an odds ratio, a binomial variance or a negative binomial variance, among others. The research of M. Jafari Jozani is supported by a grant of the Institute for Research and Planning in Higher Education, Ministry of Science, Research and Technology, Iran. The research of É. Marchand is supported by NSERC of Canada.
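
To fix ideas, here is one concrete boundary‐supported construction (an illustration of the setting, not an example taken from the paper): for $X \sim \mathrm{Bin}(n, \theta)$ with $\theta$ restricted to $[a, b] \subset (0, 1)$ and the two‐point prior $\pi = \gamma\,\delta_a + (1-\gamma)\,\delta_b$ on the boundary, the Bayes estimator under squared error loss is the posterior mean

$$
\hat\theta_\pi(x)=\frac{\gamma\,a^{x+1}(1-a)^{n-x}+(1-\gamma)\,b^{x+1}(1-b)^{n-x}}{\gamma\,a^{x}(1-a)^{n-x}+(1-\gamma)\,b^{x}(1-b)^{n-x}},
$$

and results of the kind described above give conditions under which such a boundary prior is least favourable, so that this posterior mean is minimax.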

11.
We provide a partial ordering view of horizontal inequity (HI), based on the Lorenz criterion, associated with different post‐tax income distributions and a (bistochastic) non‐parametric estimated benchmark distribution. As a consequence, several measures consistent with the Lorenz criterion can be rationalized. In addition, we establish the so‐called HI transfer principle, which imposes a normative minimum requirement that any HI measure must satisfy. Our proposed HI ordering is consistent with this principle. Moreover, we adopt a cardinal view to decompose the total effect of a tax system into a welfare gain caused by HI‐free income redistribution and a welfare loss caused by HI, without any additive decomposability restriction on the indices. Hence, more robust tests can be applied. Other decompositions in the literature are seen as particular cases.
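
Since every comparison here runs through Lorenz curves, a small numerical helper may be useful: compute the Lorenz curves of two simulated post‐tax income distributions and check pointwise dominance at common population shares.

```python
# Lorenz curves and a pointwise Lorenz-dominance check.
import numpy as np

def lorenz(income):
    s = np.sort(income)
    return np.insert(np.cumsum(s) / s.sum(), 0, 0.0)   # L(0)=0, ..., L(1)=1

rng = np.random.default_rng(7)
y_a = rng.lognormal(0.0, 0.5, 100_000)                 # more equal distribution
y_b = rng.lognormal(0.0, 0.9, 100_000)                 # less equal distribution
la, lb = lorenz(y_a), lorenz(y_b)
print(bool(np.all(la >= lb)))                          # True -> y_a Lorenz-dominates y_b
```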

12.
We consider a class of household production models characterized by a dichotomy property. In these models the amount of time spent on household production does not depend on the household utility function, conditional on household members having a paid job. We analyse the (non‐parametric) identifiability of the production function and the so‐called jointness function (a function describing which part of household production time is counted as pure leisure). It is shown that the models are identified in the two‐adult case, but not in the single‐adult case. We present an empirical application to Swedish time‐allocation data. The estimates satisfy regularity conditions that were violated in previous studies and pass various specification tests. For this data set we find that male and female home production time are q‐substitutes. Copyright © 2003 John Wiley & Sons, Ltd.

13.
Two‐state models (working/failed or alive/dead) are widely used in reliability and survival analysis. In contrast, multi‐state stochastic processes provide a richer framework for modeling and analyzing the progression of a process from an initial to a terminal state, allowing incorporation of more details of the process mechanism. We review multi‐state models, focusing on time‐homogeneous semi‐Markov processes (SMPs), and then describe the statistical flowgraph framework, which comprises analysis methods and algorithms for computing quantities of interest such as the distribution of first passage times to a terminal state. These algorithms algebraically combine integral transforms of the waiting time distributions in each state and invert them to get the required results. The estimated transforms may be based on parametric distributions or on empirical distributions of sample transition data, which may be censored. The methods are illustrated with several applications.
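
A numerical sketch of the flowgraph idea for the simplest progression 0 → 1 → 2: the first‐passage density to the terminal state is the convolution of the two waiting‐time densities, obtained by multiplying transforms and inverting. Here the FFT stands in for the Laplace transform, and the waiting times are chosen so the exact answer is known.

```python
# Transform-domain computation of a first-passage density for a 3-state chain.
import numpy as np

dt, tmax = 0.01, 40.0
t = np.arange(0.0, tmax, dt)
f01 = 0.5 * np.exp(-0.5 * t)                       # Exp(rate 1/2) wait in state 0
f12 = 0.25 * t * np.exp(-0.5 * t)                  # Gamma(2, rate 1/2) wait in state 1

F = np.fft.rfft(f01) * np.fft.rfft(f12) * dt       # product in the transform domain
f02 = np.fft.irfft(F, n=len(t))                    # invert: first-passage density 0 -> 2
exact = 0.5 ** 3 * t ** 2 * np.exp(-0.5 * t) / 2   # Gamma(3, rate 1/2) density
print(np.max(np.abs(f02 - exact)))                 # small, up to discretization error
```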

14.
Recent years have seen an explosion of activity in the field of functional data analysis (FDA), in which curves, spectra, images and so on are considered as basic functional data units. A central problem in FDA is how to fit regression models with scalar responses and functional predictors. We review some of the main approaches to this problem, categorising the basic model types as linear, non‐linear and non‐parametric. We discuss publicly available software packages and illustrate some of the procedures by application to a functional magnetic resonance imaging data set.
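
A compact sketch of the basic functional linear model with scalar response, y_i = a + ∫ X_i(t) β(t) dt + ε_i, fitted by expanding β(t) in a small sine basis. The curves are simulated (not an fMRI data set) and the basis choice is an assumption of the sketch.

```python
# Scalar-on-function linear regression via basis expansion.
import numpy as np

rng = np.random.default_rng(8)
n, m = 300, 101
t = np.linspace(0, 1, m)
sines = np.array([np.sin((k + 1) * np.pi * t) for k in range(5)])   # 5 x m
curves = rng.normal(size=(n, 5)) @ sines                            # functional predictors
beta_true = np.sin(np.pi * t) + 0.5 * np.sin(2 * np.pi * t)
y = curves @ beta_true / m + rng.normal(0, 0.05, n)                 # Riemann-sum integral

K = 4                                              # basis dimension (ad hoc)
B = sines[:K].T                                    # m x K basis for beta(t)
Z = curves @ B / m                                 # design: integrals of X_i(t)*basis_k(t)
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), Z]), y, rcond=None)
beta_hat = B @ coef[1:]                            # estimated coefficient function
print(np.max(np.abs(beta_hat - beta_true)))        # small when the basis is adequate
```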

15.
We propose a strategy for assessing structural stability in time‐series frameworks when potential change dates are unknown. Existing stability tests are effective in detecting structural change, but procedures for identifying timing are imprecise, especially in assessing the stability of variance parameters. We present a likelihood‐based procedure for assigning conditional probabilities to the occurrence of structural breaks at alternative dates. The procedure is effective in improving the precision with which inferences regarding timing can be made. We illustrate parametric and non‐parametric implementations of the procedure through Monte Carlo experiments, and an assessment of the volatility reduction in the growth rate of US GDP. Copyright © 2006 John Wiley & Sons, Ltd.
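
The core of the idea can be illustrated for a one‐time variance break in a Gaussian series: profile the likelihood over candidate break dates and normalize (implicitly a flat prior over dates) into conditional probabilities. The end trimming and the variance‐only break are simplifying assumptions of the sketch.

```python
# Likelihood-based conditional probabilities for the date of a variance break.
import numpy as np

rng = np.random.default_rng(9)
T, true_break = 300, 180
y = np.concatenate([rng.normal(0, 2.0, true_break), rng.normal(0, 0.8, T - true_break)])

def loglik_at_break(y, k):
    """Gaussian log-likelihood with variances estimated separately on each segment."""
    s1, s2 = y[:k].var(), y[k:].var()
    return -0.5 * (k * np.log(s1) + (len(y) - k) * np.log(s2))

dates = np.arange(20, T - 20)                     # trim the sample ends (assumed)
ll = np.array([loglik_at_break(y, k) for k in dates])
post = np.exp(ll - ll.max())
post /= post.sum()                                # conditional probabilities over dates
print(dates[post.argmax()], post.max())           # most probable break date and its mass
```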

16.
Within the inferential context of predicting a distribution of potential outcomes P[y(t)] under a uniform treatment assignment t ∈ T, this paper deals with partial identification of the α‐quantile of the distribution of interest, Qα[y(t)], under relatively weak and credible monotonicity‐type assumptions on the individual response functions and the population selection process. On the theoretical side, the paper adds to the existing results on non‐parametric bounds on quantiles with no prior information and under monotone treatment response (MTR) by introducing and studying the identifying properties of α‐quantile monotone treatment selection (α‐QMTS), α‐quantile monotone instrumental variables (α‐QMIV) and their combinations. The main result parallels that for the mean; MTR and α‐QMTS aid identification in a complementary fashion, so that combining them greatly increases identification power. The theoretical results are illustrated through an empirical application on the Italian returns to educational qualifications. Bounds on several quantiles of ln(wage) under different qualifications and on quantile treatment effects (QTEs) are estimated and compared with parametric quantile regression (α‐QR) and α‐IVQR estimates from the same sample. Remarkably, the α‐QMTS & MTR upper bounds on the α‐QTE of a college degree versus elementary education imply smaller year‐by‐year returns than the corresponding α‐IVQR point estimates. Copyright © 2010 John Wiley & Sons, Ltd.
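
For orientation, the no‐prior‐information bounds that the MTR/MTS assumptions sharpen can be computed directly. With p = P(Z = t) and Q_obs the quantile function of y among units with Z = t, the identified set for Qα[y(t)] is [Q_obs((α − (1 − p))/p), Q_obs(α/p)], with the lower (upper) endpoint unbounded when α ≤ 1 − p (α > p). A simulated illustration:

```python
# Worst-case (no-assumption) bounds on the alpha-quantile of a potential outcome.
import numpy as np

rng = np.random.default_rng(10)
n = 10000
z = rng.binomial(1, 0.6, n)                        # realized treatment, P(z=1) = 0.6
y = np.where(z == 1, rng.normal(1.0, 1.0, n), rng.normal(0.0, 1.0, n))

def quantile_bounds(y_obs, p, alpha):
    lb = np.quantile(y_obs, (alpha - (1 - p)) / p) if alpha > 1 - p else -np.inf
    ub = np.quantile(y_obs, alpha / p) if alpha <= p else np.inf
    return lb, ub

p = z.mean()
y_treated = y[z == 1]
for alpha in (0.25, 0.5, 0.75):
    print(alpha, quantile_bounds(y_treated, p, alpha))
```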

17.
This article is concerned with inference for seemingly unrelated non‐parametric regression models with serially correlated errors. Based on an initial estimator of the mean functions, we first construct an efficient estimator of the autoregressive parameters of the errors. Then, by applying an undersmoothing technique, and taking both the contemporaneous correlation among equations and the serial correlation into account, we propose an efficient two‐stage local polynomial estimation of the unknown mean functions. It is shown that the resulting estimator has the same bias as estimators that neglect the contemporaneous and/or serial correlation, and smaller asymptotic variance. The asymptotic normality of the resulting estimator is also established. In addition, we develop a wild block bootstrap test for the goodness‐of‐fit of models. The finite sample performance of our procedures is investigated in a simulation study, the results of which are very supportive, and a real data set is analysed to illustrate the usefulness of our procedures.
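
A stripped‐down version of the first stage, for a single equation: fit an initial kernel smoother, then estimate the AR(1) error coefficient from its residuals. The efficient second‐stage re‐smoothing and the cross‐equation weighting the paper develops are omitted; data are simulated.

```python
# Initial smoother plus AR(1) error-coefficient estimation from residuals.
import numpy as np

rng = np.random.default_rng(12)
T = 600
x = rng.uniform(0, 1, T)
e = np.zeros(T)
for s in range(1, T):                            # AR(1) errors, rho = 0.6
    e[s] = 0.6 * e[s - 1] + rng.normal(0, 0.3)
y = np.sin(2 * np.pi * x) + e

def nw(x0, x, y, h=0.08):
    """Nadaraya-Watson smoother with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return (w * y).sum() / w.sum()

resid = y - np.array([nw(xi, x, y) for xi in x])
rho_hat = (resid[1:] * resid[:-1]).sum() / (resid[:-1] ** 2).sum()
print(rho_hat)                                   # close to 0.6
```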

18.
We estimate the costs of distributing electricity using data on municipal electric utilities in Ontario, Canada for the period 1993–5. The data reveal substantial evidence of increasing returns to scale, with minimum efficient scale being achieved by firms with about 20,000 customers. Larger firms exhibit constant or decreasing returns. Utilities that deliver additional services (such as water/sewage) have significantly lower costs, indicating the presence of economies of scope. Our basic specifications comprise semiparametric variants of the translog cost function where output enters non‐parametrically and the remaining variables (including their interactions with output) are parametric. We rely upon non‐parametric differencing techniques and extend a previous differencing test of equality of non‐parametric regression functions to a panel data setting. Copyright © 2000 John Wiley & Sons, Ltd.
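
A fully parametric baseline of the kind the paper's semiparametric variants relax: an OLS translog cost function in one output and two input prices, with linear homogeneity in prices imposed by normalizing with one price. The data are simulated, and the coefficients are chosen so that small firms show increasing and large firms decreasing returns, echoing the finding above.

```python
# OLS translog cost function with price homogeneity imposed.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 400
ly = rng.uniform(7, 11, n)                        # log output (e.g. log customers)
lp1, lp2 = rng.normal(0, 0.3, (2, n))             # log input prices
lc = (1.0 - 0.1 * ly + 0.06 * ly ** 2 + 0.5 * (lp1 - lp2)
      + rng.normal(0, 0.1, n)) + lp2              # log cost, homogeneous in prices

X = sm.add_constant(np.column_stack([ly, ly ** 2, lp1 - lp2]))
fit = sm.OLS(lc - lp2, X).fit()                   # normalize cost and prices by p2
cost_elasticity = fit.params[1] + 2 * fit.params[2] * ly   # d ln C / d ln y per firm
print(fit.params)
print((cost_elasticity < 1).mean())               # share of firms with increasing returns
```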

19.
This paper shows how scale efficiency can be measured from an arbitrary parametric hyperbolic distance function with multiple outputs and multiple inputs. It extends the methods introduced by Ray (J Product Anal 11:183–194, 1998) and by Balk (J Product Anal 15:159–183, 2001) and Ray (2003), which measure scale efficiency from a single-output, multi-input distance function and from a multi-output, multi-input distance function, respectively. The method developed in the present paper differs from Ray's and Balk's in that it allows for simultaneous contraction of inputs and expansion of outputs. Theorems applicable to an arbitrary parametric hyperbolic distance function are introduced first, and their use in measuring scale efficiency is then illustrated with the translog functional form.

20.
In consumer theory, the principles of Lancaster's characteristics approach and hedonic pricing appear to offer the most promising insight into choice when qualitative aspects are important. The paper reconciles these principles with the family of non‐parametric frontier estimation methods known as data envelopment analysis (DEA). It is shown that, with some straightforward adjustments, DEA is entirely consistent with the characteristics view of consumer choice found in the economics literature. In making Lancaster's ideas operational, the paper also addresses the theoretical concern voiced by Lancaster about combining indivisible products. The principles are illustrated with a case study comparing diesel cars. The paper concludes that the user will ultimately have to apply some judgement in choosing between competing efficient products. However, the analysis should help to restrict the number of products to be assessed to manageable proportions. Copyright © 2002 John Wiley & Sons, Ltd.
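
An input‐oriented, constant‐returns DEA sketch in the consumer‐choice spirit of the paper, with price as the single input and product characteristics as outputs; the "diesel car" numbers are invented for illustration.

```python
# Input-oriented CRS DEA via linear programming: price in, characteristics out.
import numpy as np
from scipy.optimize import linprog

price = np.array([18.0, 21.0, 24.0, 20.0, 27.0])          # input (price, 1000s)
chars = np.array([[90, 5.0], [110, 5.5], [120, 6.5],      # outputs: power, economy
                  [95, 6.0], [115, 5.2]])

def dea_input_efficiency(i, price, chars):
    n = len(price)
    c = np.r_[1.0, np.zeros(n)]                            # minimize theta
    A = [np.r_[-price[i], price]]                          # sum_j lam_j p_j <= theta p_i
    b = [0.0]
    for k in range(chars.shape[1]):                        # outputs at least unit i's
        A.append(np.r_[0.0, -chars[:, k]])
        b.append(-chars[i, k])
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun

theta = np.array([dea_input_efficiency(i, price, chars) for i in range(len(price))])
print(np.round(theta, 3))        # theta == 1 marks an efficient product
```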
