Similar Literature
20 similar documents retrieved (search time: 328 ms).
1.
Multiple event data are frequently encountered in medical follow-up, engineering and other applications when the multiple events are considered as the major outcomes. They may be repetitions of the same event (recurrent events) or events of a different nature. Times between successive events (gap times) are often of direct interest in these applications. The stochastic-ordering structure and within-subject dependence of multiple events generate statistical challenges for analysing such data, including induced dependent censoring and non-identifiability of marginal distributions. This paper provides an overview of a class of existing non-parametric estimation methods for gap time distributions for various types of multiple event data, where sampling bias from induced dependent censoring is effectively adjusted. We discuss the statistical issues in gap time analysis, describe the estimation procedures and illustrate the methods with a comparative simulation study and a real application to an AIDS clinical trial. A comprehensive understanding of the challenges and available methods for non-parametric analysis is useful because there is no standard approach to identifying an appropriate gap time method for the research question of interest. The methods discussed in this review allow practitioners to handle a variety of real-world multiple event data effectively.
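As a minimal, hypothetical illustration of the non-parametric starting point (not the bias-adjusted estimators reviewed in the paper), the sketch below computes a Kaplan-Meier estimate for the first gap time only, which is subject to ordinary independent censoring; gap times beyond the first are exactly where the induced dependent censoring discussed above calls for the adjusted methods. The data are made up.

    import numpy as np

    def kaplan_meier(times, events):
        """Kaplan-Meier survival estimate.
        times  : observed gap times (event or censoring times)
        events : 1 if the event was observed, 0 if censored
        Returns the distinct event times and the estimated survival curve."""
        order = np.argsort(times)
        times, events = np.asarray(times)[order], np.asarray(events)[order]
        uniq = np.unique(times[events == 1])
        surv, s = [], 1.0
        for t in uniq:
            at_risk = np.sum(times >= t)                  # still under observation at t
            d = np.sum((times == t) & (events == 1))      # events occurring at t
            s *= 1.0 - d / at_risk
            surv.append(s)
        return uniq, np.array(surv)

    # toy first-gap-time data: time from study entry to the first event
    gap1   = np.array([2.0, 3.5, 5.0, 6.1, 7.4, 9.0, 12.0])
    status = np.array([1,   1,   0,   1,   1,   0,   1])   # 0 = censored
    t, S = kaplan_meier(gap1, status)
    print(np.column_stack([t, S]))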

2.
This paper presents both the history of and state-of-the-art in empirical modeling approaches to world commodity price volatility. The analysis builds on the storage model and key milestones in its development. Specifically, it is intended to offer a reader unfamiliar with the relevant literature an insight into the modeling issues at stake, from both a historical and a speculative viewpoint. The review considers primarily the empirical techniques designed to assess the merits of the storage theory; it does not address purely statistical approaches that do not rely on storage theory and that have been studied in depth in other streams of the commodity price literature. The paper concludes with some suggestions for future research that may resolve some of the existing empirical flaws and, hopefully, increase the explanatory power of the storage model.

3.
In their advocacy of the rank-transformation (RT) technique for the analysis of data from factorial designs, Mendeş and Yiğit (Statistica Neerlandica, 67, 2013, 1–26) missed important analytical studies identifying the statistical shortcomings of the RT technique, the recommendation that the RT technique not be used, and important advances that have been made for properly analyzing data in a non-parametric setting. Applied data analysts are at risk of being misled by Mendeş and Yiğit when statistically sound techniques are available for the proper non-parametric analysis of data from factorial designs. The appropriate methods express hypotheses in terms of normalized distribution functions, and the test statistics account for variance heterogeneity.

4.
5.
6.
Forecasting labour market flows is important for budgeting and decision-making in government departments and public administration. Macroeconomic forecasts are normally obtained from time series data. In this article, we follow another approach that uses individual-level statistical analysis to predict the number of exits out of unemployment insurance claims. We present a comparative study of econometric, actuarial and statistical methodologies that are based on different data structures. The results, obtained with records from the German unemployment insurance system, suggest that prediction based on individual-level statistical duration analysis is an interesting alternative to forecasting from aggregate data. In particular, forecasts of up to six months ahead are surprisingly precise and turn out to be more precise than the time series forecasts considered.
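A stylized sketch of that idea, on made-up data and with a deliberately simple constant-hazard (exponential) duration model rather than the methodologies actually compared in the paper: estimate the exit hazard from individual claim spells, then aggregate individual exit probabilities over the current stock of claimants into monthly forecasts.

    import numpy as np

    rng = np.random.default_rng(0)

    # hypothetical unemployment-insurance spells (months) with right censoring
    durations = rng.exponential(6.0, size=500)
    exited    = rng.random(500) < 0.8            # True = exit observed, False = still claiming

    # maximum-likelihood exit hazard for a constant-hazard (exponential) model:
    # number of observed exits divided by total time at risk
    hazard = exited.sum() / durations.sum()

    # forecast: expected exits per month among the current stock of claimants
    # (with a memoryless hazard the elapsed claim duration does not matter;
    #  richer duration models would condition on it and on covariates)
    n_claimants = 200
    for h in range(1, 7):
        p_exit_month_h = np.exp(-hazard * (h - 1)) - np.exp(-hazard * h)
        print(f"month {h}: expected exits = {p_exit_month_h * n_claimants:.1f}")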

7.
8.
This is an expository paper on applications of statistics in the field of general insurance, also called non-life insurance. Unlike life insurance, where advanced statistical techniques have long been part of financial mathematics and actuarial applications, their use is only relatively recent in non-life insurance. The business model of insurance companies, especially those active in non-life insurance, has seen dramatic changes over the last 15 years. The aim of this paper is to convince readers that, especially today, non-life insurance is not only exciting ground for applying existing modern statistical tools but also a fertile environment for new and challenging statistical developments. The activities of an insurance company can be viewed as an industrial process in which data management and data analysis play a key role. That is why a fundamental understanding of data-related issues (such as data quality, variability, analysis and correct interpretation) is so essential to the insurance business. These are exactly the tasks at which professional statisticians excel. Also, a better understanding of the field of general insurance by statisticians will promote fruitful exchanges between actuaries and statisticians, thereby helping to bring the actuarial and statistical professional societies closer to each other. Selected examples, all based on the author's experience, are used to cover the essential aspects of general insurance. The paper concludes with some remarks on the role of statisticians working in general insurance.

9.
The prevailing lack of consensus about the comparative harms and benefits of cancer screening stems, in part, from inappropriate calculations of the expected mortality impact of a sustained screening programme. There is an inherent, and often substantial, time lag from the time of screening until the resulting mortality reductions begin, reach their maximum and ultimately end. However, the cumulative mortality reduction reported in a randomised screening trial is typically calculated over an arbitrarily defined follow-up period, including follow-up time where the mortality impact is yet to be realised or has already been exhausted. Because of this, the cumulative reduction cannot be used for projecting the mortality impact expected from a sustained screening programme. For this purpose, we propose a new measure: the time-specific probability of being helped by screening, given that the cancer would otherwise have proven fatal. This can be decomposed into round-specific impacts, which in turn can be parametrised and estimated from the trial data. This represents a major shift in quantifying the benefits of a sustained screening programme, based on statistical evidence extracted from existing trial data. We illustrate our approach using data from screening trials in lung and colorectal cancers.

10.
Two-state models (working/failed or alive/dead) are widely used in reliability and survival analysis. In contrast, multi-state stochastic processes provide a richer framework for modeling and analyzing the progression of a process from an initial to a terminal state, allowing incorporation of more details of the process mechanism. We review multi-state models, focusing on time-homogeneous semi-Markov processes (SMPs), and then describe the statistical flowgraph framework, which comprises analysis methods and algorithms for computing quantities of interest such as the distribution of first passage times to a terminal state. These algorithms algebraically combine integral transforms of the waiting time distributions in each state and invert them to get the required results. The estimated transforms may be based on parametric distributions or on empirical distributions of sample transition data, which may be censored. The methods are illustrated with several applications.
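The flowgraph machinery itself works by combining and inverting integral transforms; as a simpler stand-in, the sketch below simulates a small three-state semi-Markov process with hypothetical transition probabilities and Weibull waiting times, and approximates the first-passage-time distribution to the absorbing state by Monte Carlo.

    import numpy as np

    rng = np.random.default_rng(1)

    # states: 0 = healthy, 1 = ill, 2 = dead (absorbing); all numbers are hypothetical
    P = {0: ([1, 2], [0.7, 0.3]),          # possible next states and their probabilities
         1: ([0, 2], [0.4, 0.6])}

    def waiting_time(i, j):
        """Hypothetical Weibull waiting time for the transition i -> j."""
        scale = {(0, 1): 2.0, (0, 2): 5.0, (1, 0): 1.0, (1, 2): 3.0}[(i, j)]
        return rng.weibull(1.5) * scale

    def first_passage(start=0, absorbing=2):
        state, t = start, 0.0
        while state != absorbing:
            nxt = int(rng.choice(P[state][0], p=P[state][1]))
            t += waiting_time(state, nxt)
            state = nxt
        return t

    samples = np.array([first_passage() for _ in range(20000)])
    print("median and 90th percentile of first passage to the terminal state:",
          np.percentile(samples, [50, 90]))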

11.
This essay is concerned with the planning and densification of suburbs, which present a huge challenge insofar as they form a large area of urbanized land that remains underexploited due to low residential density. Drawing on current research in the Paris city-region, the essay focuses specifically on the difficulty in implementing densification policies in low-rise suburban areas. It examines the varying degrees of densification fostered by these policies, and builds upon recent urban studies literature on suburban change to trace how suburban areas are being transformed through regulations, instruments and market dynamics associated with densification processes. What kinds of densification policy are being implemented and what are the socio-economic, political and cultural determinants of each type of regulatory approach? This essay will attempt to answer this question via an analysis of the densification policies being put in place in the municipalities of the Paris city-region. It will offer in turn a typology of these different policies. It shows that densification is an instrument that can be used to address local political concerns which vary greatly depending on the economic, social and geographical position of municipalities within larger urban areas.

12.
In statistical analysis and operational research many techniques are applied which rely on the fact that the phenomenon under consideration has a distribution that is approximately normal. This distribution type has a number of very nice properties and a constant pattern, so that "normal" phenomena can be predicted. In the case of non-normal, deviated or skewed distributions, analysis and forecasting become much more complicated. This article describes a method by which some non-normal variables can be transformed into normal ones, so that techniques based on normal distributions may be used.
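The article's own transformation is not reproduced here; as two common ways of mapping a skewed variable to an approximately normal one, the sketch below applies a Box-Cox power transform and, as an alternative, rank-based normal scores, using simulated lognormal data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    x = rng.lognormal(mean=0.0, sigma=0.8, size=1000)   # skewed, strictly positive variable

    # Box-Cox: chooses the power lambda that makes the data most nearly normal
    x_bc, lam = stats.boxcox(x)
    print(f"Box-Cox lambda = {lam:.3f}, skewness after transform = {stats.skew(x_bc):.3f}")

    # rank-based normal scores: map ranks to standard-normal quantiles
    ranks = stats.rankdata(x)
    x_ns = stats.norm.ppf((ranks - 0.5) / len(x))
    print(f"skewness of normal scores = {stats.skew(x_ns):.3f}")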

13.
There has been substantial debate over whether GNP has a unit root. However, statistical tests have had little success in distinguishing between unit-root and trend-reverting specifications because of their poor statistical properties. This paper develops a new exact small-sample, pointwise most powerful unit root test that is invariant to the unknown mean and scale of the time series tested, that generates exact small-sample critical values, powers and p-values, that has power approximating the maximum possible power, and that is highly robust to conditional heteroscedasticity. This test decisively rejects the unit root null hypothesis when applied to the annual US real GNP and US real per capita GNP series. The paper also develops a modified version of the test to address whether a time series contains a permanent, unit root process in addition to a temporary, stationary process. It shows that if these GNP series contain a unit root process in addition to the stationary process, it is most likely very small. Copyright © 2001 John Wiley & Sons, Ltd.
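The paper's exact small-sample, point-optimal test is not, to my knowledge, available in standard software; purely for orientation, the sketch below applies the familiar augmented Dickey-Fuller test (a different and less powerful test) to a simulated trend-stationary series, to show the kind of null hypothesis at stake.

    import numpy as np
    from statsmodels.tsa.stattools import adfuller

    rng = np.random.default_rng(3)

    # simulated log "GNP": deterministic trend plus stationary AR(1) deviations
    n, phi = 80, 0.7
    e = rng.normal(scale=0.02, size=n)
    dev = np.zeros(n)
    for t in range(1, n):
        dev[t] = phi * dev[t - 1] + e[t]
    y = 0.03 * np.arange(n) + dev            # trend-reverting by construction

    stat, pvalue, *_ = adfuller(y, regression="ct")   # allow a constant and a trend
    print(f"ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")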

14.
Time series data of interest to social scientists often have the property of random walks, in which the statistical properties of the series, including means and variances, vary over time. Such non-stationary series are by definition unpredictable. Failure to meet the assumption of stationarity when analyzing time series variables may result in spurious and unreliable statistical inferences. This paper outlines the problems of using non-stationary data in regression analysis and identifies innovative solutions developed recently in econometrics. Cointegration and error-correction models have recently received positive attention as remedies to the problems of 'spurious regression' arising from non-stationary series. In this paper, we illustrate the statistical concepts behind these methods by referring to similar concepts used in cross-sectional analysis. A historical example is used to demonstrate how such techniques are applied. It illustrates that 'foreign' immigrants to Canada (1896–1940) experienced elevated levels of social control in areas of high police discretion. 'Foreign' immigration was unrelated to trends in serious crimes but closely related to vagrancy and drunkenness. The merits of cointegration are compared to traditional approaches to the regression analysis of time series.
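A minimal sketch of the two-step Engle-Granger logic on simulated data (not the historical Canadian series analysed here): test whether two integrated series are cointegrated, then use the lagged long-run residual as the error-correction term in a model of the short-run dynamics.

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.tsa.stattools import coint

    rng = np.random.default_rng(4)
    n = 200
    x = np.cumsum(rng.normal(size=n))              # a random walk
    y = 0.5 * x + rng.normal(scale=0.5, size=n)    # cointegrated with x by construction

    # step 1: Engle-Granger cointegration test
    stat, pvalue, _ = coint(y, x)
    print(f"cointegration test p-value = {pvalue:.3f}")

    # step 2: error-correction model for the short-run dynamics of y
    resid = sm.OLS(y, sm.add_constant(x)).fit().resid      # long-run residuals
    dy, dx, ect = np.diff(y), np.diff(x), resid[:-1]
    ecm = sm.OLS(dy, sm.add_constant(np.column_stack([dx, ect]))).fit()
    print(ecm.params)   # constant, short-run effect of dx, error-correction speed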

15.
Statistical Thinking in Empirical Enquiry
This paper discusses the thought processes involved in statistical problem solving in the broad sense from problem formulation to conclusions. It draws on the literature and in-depth interviews with statistics students and practising statisticians aimed at uncovering their statistical reasoning processes. From these interviews, a four-dimensional framework has been identified for statistical thinking in empirical enquiry. It includes an investigative cycle, an interrogative cycle, types of thinking and dispositions. We have begun to characterise these processes through models that can be used as a basis for thinking tools or frameworks for the enhancement of problem-solving. Tools of this form would complement the mathematical models used in analysis and address areas of the process of statistical investigation that the mathematical models do not, particularly areas requiring the synthesis of problem-contextual and statistical understanding. The central element of published definitions of statistical thinking is "variation". We further discuss the role of variation in the statistical conception of real-world problems, including the search for causes.

16.
Vast amounts of data that could be used in the development and evaluation of policy for the benefit of society are collected by statistical agencies. It is therefore no surprise that there is very strong demand from analysts within business, government, universities and other organisations to access such data. When allowing access to micro-data, a statistical agency is obliged, often legally, to ensure that access is unlikely to result in the disclosure of information about a particular person or organisation. Managing the risk of disclosure is referred to as statistical disclosure control (SDC). This paper describes an approach to SDC for output from analysis using generalised linear models, including estimates of regression parameters and their variances, diagnostic statistics and plots. The Australian Bureau of Statistics has implemented the approach in a remote analysis system, which returns analysis output from remotely submitted queries. A framework for measuring the disclosure risk associated with a remote server is proposed. The disclosure risk and utility of the approach are measured in two real-life case studies and in simulation.
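The ABS rules themselves are not reproduced here. Purely as a hypothetical illustration of the kind of gatekeeping a remote analysis server might apply before releasing regression output, the sketch below refuses to return a fitted model when the sample is too small or when a single record dominates the fit; the thresholds and checks are invented for the example.

    import numpy as np
    import statsmodels.api as sm

    # hypothetical thresholds -- invented for illustration, not actual ABS rules
    MIN_OBS = 30
    MAX_LEVERAGE = 0.5

    def safe_ols_output(y, X):
        """Fit an OLS model and release output only if simple disclosure checks pass."""
        if len(y) < MIN_OBS:
            return "refused: too few observations"
        fit = sm.OLS(y, sm.add_constant(X)).fit()
        if fit.get_influence().hat_matrix_diag.max() > MAX_LEVERAGE:
            return "refused: a single record dominates the fit"
        return fit.params, fit.bse          # coefficient estimates and standard errors

    rng = np.random.default_rng(5)
    X = rng.normal(size=(100, 2))
    y = X @ np.array([1.0, -0.5]) + rng.normal(size=100)
    print(safe_ols_output(y, X))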

17.
The goal of statistical scale space analysis is to extract scale-dependent features from noisy data. The data could be, for example, an observed time series or a digital image, in which case features at different temporal or spatial scales would be sought. Since the 1990s, a number of statistical approaches to scale space analysis have been developed, most of them using smoothing to capture scales in the data, although other interpretations of scale have also been proposed. We review the various statistical scale space methods proposed and mention some of their applications.
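As a minimal sketch of the smoothing-based view of scale (not any particular method from the review), the code below smooths a simulated noisy series with Gaussian kernels of increasing bandwidth; the family of smooths is the scale space, and features that persist across bandwidths are the scale-dependent structure one tries to extract.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    rng = np.random.default_rng(6)
    t = np.linspace(0, 1, 500)
    signal = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)
    y = signal + rng.normal(scale=0.4, size=t.size)

    # a family of smooths indexed by bandwidth = the "scale space"
    for sigma in (2, 8, 32):
        smooth = gaussian_filter1d(y, sigma=sigma)
        n_extrema = np.sum(np.diff(np.sign(np.diff(smooth))) != 0)
        print(f"sigma = {sigma:3d}: {n_extrema} local extrema in the smooth")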

18.
Our research examines whether expanding health insurance has a causal effect on diabetes incidence. Comprehensive county-level data from the United States are used to study the effect of Medicaid expansion on diabetes rates. The analysis exploits cross-county variation in the Affordable Care Act health care reforms, along with variation in county eligibility shares. Difference-in-difference and triple-difference regression specifications are employed to control for confounding variables. The results suggest a slight negative relationship between expanding health insurance and diabetes diagnoses.
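The county-level data are not available here; the sketch below sets up a generic difference-in-difference regression on simulated data, with the treatment-by-post interaction carrying the effect of interest. The group labels, effect size and sample are all made up.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 2000
    treated = rng.integers(0, 2, n)            # 1 = expansion county (hypothetical)
    post    = rng.integers(0, 2, n)            # 1 = after the reform
    effect  = -0.3                             # built-in "true" DiD effect

    # simulated diabetes diagnosis rate per county-period
    y = (5.0 + 0.5 * treated + 0.2 * post + effect * treated * post
         + rng.normal(scale=1.0, size=n))

    X = sm.add_constant(np.column_stack([treated, post, treated * post]))
    fit = sm.OLS(y, X).fit(cov_type="HC1")     # heteroscedasticity-robust standard errors
    print(fit.params[-1], fit.bse[-1])         # estimated DiD coefficient and its s.e.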

19.
The paper deals with the statistical modeling of convergence and cohesion over time using kurtosis, skewness and L-moments. Changes in the shape of the distribution related to the spatial allocation of socio-economic phenomena are treated as evidence of global shift, divergence or convergence. Cross-sectional time-series statistical modeling of the variables of interest is intended to overcome the limitations of theoretical econometric models of the determinants of convergence and cohesion. L-moments prove much more stable and interpretable than classical measures. Empirical evidence from panel data shows that a single pure pattern (global shift, polarization or cohesion) rarely exists and that joint analysis is required.
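As a sketch of the basic building blocks (not the paper's full panel modeling), the code below computes sample L-moments via probability-weighted moments and the resulting L-skewness and L-kurtosis, the shape measures one would track over time; the regional income data are simulated.

    import numpy as np

    def sample_l_moments(x):
        """First four sample L-moments via unbiased probability-weighted moments."""
        x = np.sort(np.asarray(x, dtype=float))
        n = x.size
        j = np.arange(1, n + 1)
        b0 = x.mean()
        b1 = np.sum((j - 1) * x) / (n * (n - 1))
        b2 = np.sum((j - 1) * (j - 2) * x) / (n * (n - 1) * (n - 2))
        b3 = np.sum((j - 1) * (j - 2) * (j - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
        l1 = b0                                  # L-location (mean)
        l2 = 2 * b1 - b0                         # L-scale
        l3 = 6 * b2 - 6 * b1 + b0
        l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
        return l1, l2, l3 / l2, l4 / l2          # mean, L-scale, L-skewness, L-kurtosis

    rng = np.random.default_rng(8)
    regional_income = rng.lognormal(mean=10.0, sigma=0.5, size=300)   # hypothetical
    print(sample_l_moments(regional_income))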

20.
This paper presents a new empirical approach to address the problem of trading time differences between markets in studies of financial contagion. In contrast to end-of-business-day data common to most contagion studies, we employ price observations, which are exactly aligned in time to correct for time-zone and end-of-business-day differences between markets. Additionally, we allow for time lags between price observations in order to test the assumption that the shock is not immediately transmitted from one market to the other. Our analysis of the financial turmoil surrounding the Asian crisis reveals that such corrections have an important bearing on the evidence for contagion, independent of the methodology employed. Using a correlation-based test, we find more contagion the faster we assume the shock to be transmitted.
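The sketch below illustrates a generic correlation-based comparison on simulated, time-aligned returns, including a one-period transmission lag and the standard Forbes-Rigobon adjustment for higher crisis-period volatility; it is not the paper's dataset or its exact test, and all numbers are invented.

    import numpy as np

    rng = np.random.default_rng(9)

    def two_markets(n, x_scale, beta=0.4, lag=1):
        """Source-market returns x and a second market y that reacts one period later."""
        x = rng.normal(scale=x_scale, size=n)
        eps = rng.normal(scale=1.0, size=n)      # idiosyncratic noise, constant variance
        y = np.empty(n)
        y[:lag] = eps[:lag]
        y[lag:] = beta * x[:-lag] + eps[lag:]
        return x, y

    lag = 1
    x_t, y_t = two_markets(500, x_scale=1.0)     # tranquil period
    x_c, y_c = two_markets(250, x_scale=2.5)     # crisis: source market more volatile

    # time-aligned, lag-adjusted correlations
    rho_t = np.corrcoef(x_t[:-lag], y_t[lag:])[0, 1]
    rho_c = np.corrcoef(x_c[:-lag], y_c[lag:])[0, 1]

    # Forbes-Rigobon correction for the mechanical rise in correlation caused by
    # the higher crisis-period variance of the source market
    delta = np.var(x_c) / np.var(x_t) - 1.0
    rho_adj = rho_c / np.sqrt(1.0 + delta * (1.0 - rho_c**2))
    print(f"tranquil {rho_t:.2f}  crisis (raw) {rho_c:.2f}  crisis (adjusted) {rho_adj:.2f}")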
