Similar Documents
1.
Sonja Kuhnt, Metrika (2010) 71(3): 281-294
Loglinear Poisson models are commonly used to analyse contingency tables, yet robustness of parameter estimators as well as outlier detection have rarely been treated in this context. We start with finite-sample breakdown points and show that the breakdown point of mean value estimators determines a lower bound for a masking breakdown point of a class of one-step outlier identification rules. Within a more refined breakdown approach, which takes account of the structure of the contingency table, a stochastic breakdown function is defined: it returns the probability that a given proportion of outliers, placed at random, falls on a pattern where breakdown is possible. Finally, the introduced breakdown concepts are applied to characterise the maximum likelihood estimator and a median-polish estimator.
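The abstract does not spell the rules out, but a one-step outlier identification rule of the kind it studies can be sketched as follows (a minimal illustration, not the paper's estimator): fit the main-effects (independence) loglinear Poisson model once, then flag cells whose Pearson residuals exceed a fixed cutoff. Because the non-robust ML fit is itself pulled toward outlying cells, such rules are prone to masking, which is exactly what the masking breakdown point quantifies.

```python
import numpy as np

def one_step_outlier_cells(table, cutoff=3.0):
    """One-step rule (illustrative sketch): estimate the mean structure
    once under the main-effects (independence) loglinear Poisson model,
    then flag cells with large Pearson residuals."""
    table = np.asarray(table, dtype=float)
    fitted = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
    resid = (table - fitted) / np.sqrt(fitted)   # Pearson residuals
    return np.argwhere(np.abs(resid) > cutoff), resid

table = [[25,  3,  4],
         [ 2, 30,  5],
         [ 4,  6, 90]]   # toy table with candidate outlying cells
flagged, resid = one_step_outlier_cells(table)
print(flagged)
```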

2.
M. Haber, Metrika (1984) 31(1): 195-202
The asymptotic power of the frequency χ² test depends on a noncentrality parameter λ. Mitra [1958] offered a general expression for λ which is rather difficult to apply. This work provides simplified formulae for λ in various models associated with multidimensional contingency tables.
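Once λ is in hand, the asymptotic power is a noncentral χ² tail probability. A minimal check (assuming the standard parameterization, in which the statistic is asymptotically noncentral χ²(df, λ) under the alternative):

```python
from scipy.stats import chi2, ncx2

def chi2_power(lam, df, alpha=0.05):
    """Asymptotic power of a chi-squared test: tail probability of the
    noncentral chi-squared at the central critical value."""
    crit = chi2.ppf(1 - alpha, df)   # critical value under H0
    return ncx2.sf(crit, df, lam)    # P(chi2'(df, lam) > crit)

print(chi2_power(lam=8.0, df=2))     # e.g. a 2x3 independence test
```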

3.
4.
Imputation procedures such as fully efficient fractional imputation (FEFI) or multiple imputation (MI) create multiple versions of the missing observations, thereby reflecting uncertainty about their true values. Multiple imputation generates a finite set of imputations through a posterior predictive distribution; fractional imputation assigns weights to the observed data. The focus of this article is the development of FEFI for partially classified two-way contingency tables. Point estimators and variances of FEFI estimators of population proportions are derived. Simulation results, when data are missing completely at random or missing at random, show that FEFI is comparable in performance to maximum likelihood estimation and multiple imputation, and superior to simple stochastic imputation and complete case analysis. Methods are illustrated with four data sets.
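The common backbone of these estimators for a partially classified two-way table is easy to sketch. Below is a minimal EM fit (my sketch, under a missing-at-random assumption) that allocates partially classified counts fractionally across their compatible cells; FEFI additionally retains these fractional weights, together with replication weights for variance estimation, which this sketch omits.

```python
import numpy as np

def em_partial_table(full, row_only, col_only, iters=200):
    """ML cell proportions for a two-way table with supplemental margins:
    `full` is the fully classified r x c count table, `row_only[i]` counts
    units with only the row observed, `col_only[j]` likewise. Partially
    classified counts are split fractionally across compatible cells in
    proportion to the current fit (EM)."""
    full = np.asarray(full, float)
    row_only = np.asarray(row_only, float)[:, None]
    col_only = np.asarray(col_only, float)[None, :]
    p = full / full.sum()                      # initial fit
    for _ in range(iters):
        # E-step: fractional allocation of the partial counts
        rows = row_only * p / p.sum(axis=1, keepdims=True)
        cols = col_only * p / p.sum(axis=0, keepdims=True)
        aug = full + rows + cols
        p = aug / aug.sum()                    # M-step: refit proportions
    return p

full = [[30., 10.], [5., 25.]]
row_only = [8., 6.]     # row observed, column missing
col_only = [4., 7.]     # column observed, row missing
print(em_partial_table(full, row_only, col_only).round(3))
```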

5.
Given the undoubtedly major advances in the analysis of contingency tables which have been achieved over the past ten years or so (see for example, Fienberg, 1980; Upton, 1978; Everitt, 1977; Haberman, 1978, 1979), it might seem rather unnecessary to want to return to first principles again. However, the need arises precisely because of these advances; for progress in the specifically causal analysis of contingency tables has not matched the other advances at all. Whilst Fienberg devoted a chapter to causal analysis, he made it clear that he views “the assignment of numerical values [to the arrows in a path diagram] as problematic, and [he] would limit [the analysis] to an indication of sign for causal relationships, in a fashion similar to that described by Blalock (1964)” (Fienberg, 1980, pp. 91–92). Considering how far quantitative-variable causal analysis has developed since Blalock (1964), it becomes clear that the causal analysis of qualitative data is still at a rather primitive stage. Indeed, Haberman (1978, 1979), in his two-volume survey of the analysis of qualitative data, does not mention it at all. The problem, I believe, is that log-linear and logit methodology are not particularly suited to the logic of causality in contingency tables. In order to derive a suitable method, it is necessary to uncover the logic underlying causality when applied to qualitative variables. A few others have taken seriously the idea that a direct analysis of the form of a contingency table can lead to fruitful results (see, especially, for example, Boudon, 1967), but their work has been overshadowed by the statistically more profound advances made in log-linear methods. This article is an attempt to provide a statistically rigorous analysis based on the direct interpretation of causality embodied in a contingency table.

6.
For contingency tables with extensive missing data, the unrestricted MLE under the saturated model, computed by the EM algorithm, is generally unsatisfactory. In this case, it may be better to fit a simpler model by imposing some restrictions on the parameter space. Perlman and Wu (1999) propose lattice conditional independence (LCI) models for contingency tables with arbitrary missing data patterns. When this LCI model fits well, the restricted MLE under the LCI model is more accurate than the unrestricted MLE under the saturated model, but not in general. Here we propose certain empirical Bayes (EB) estimators that adaptively combine the best features of the restricted and unrestricted MLEs. These EB estimators appear to be especially useful when the observed data is sparse, even in cases where the suitability of the LCI model is uncertain. We also study a restricted EM algorithm (called the ER algorithm) with similar desirable features.
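The paper derives its EB estimators formally; as a rough illustration of the general idea of adaptively combining a restricted and an unrestricted MLE, here is a heuristic shrinkage sketch (my construction with an ad hoc weight, not the paper's estimator): shrink the saturated fit toward an independence fit, more strongly when the two fits agree.

```python
import numpy as np

def eb_shrink(table):
    """Shrink the saturated MLE of cell probabilities toward the
    independence (restricted) fit, with a data-driven weight -- a
    heuristic sketch of EB-style smoothing for sparse tables."""
    n_tab = np.asarray(table, float)
    n = n_tab.sum()
    p_hat = n_tab / n                              # unrestricted MLE
    p_ind = np.outer(p_hat.sum(1), p_hat.sum(0))   # restricted (independence) fit
    # ad hoc weight: shrink harder when the two fits are close
    k = n * np.sum((p_hat - p_ind) ** 2) / np.sum(p_ind * (1 - p_ind))
    w = 1.0 / (1.0 + k)
    return w * p_ind + (1 - w) * p_hat

sparse = [[3, 0, 1], [0, 2, 0], [1, 0, 4]]
print(eb_shrink(sparse).round(3))
```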

7.
Inference about conditional independence in relation to log linear models is discussed for contingency tables. The parameters and likelihood ratios for a log linear model with a dependent variable are shown to be identical to those for a multivariate model. An approximate method of calculating log likelihood ratios, even when all dimensions of the table have more than two levels (no binary variables), is derived. The implications for sociological “causal” models are discussed.
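For a three-way table, the likelihood-ratio (G²) test of conditional independence has closed-form expected counts within each layer of the conditioning variable, with no restriction to binary variables; a minimal sketch:

```python
import numpy as np
from scipy.stats import chi2

def g2_conditional_independence(table):
    """Likelihood-ratio (G^2) test of X independent of Y given Z for an
    r x c x k count array: expected counts under conditional independence
    are row_margin * col_margin / layer_total within each layer of Z."""
    t = np.asarray(table, float)
    r, c, k = t.shape
    g2 = 0.0
    for z in range(k):
        layer = t[:, :, z]
        exp = np.outer(layer.sum(1), layer.sum(0)) / layer.sum()
        mask = layer > 0                       # 0 * log(0) contributes 0
        g2 += 2.0 * np.sum(layer[mask] * np.log(layer[mask] / exp[mask]))
    df = k * (r - 1) * (c - 1)
    return g2, df, chi2.sf(g2, df)

table = np.array([[[12, 7], [5, 9]],
                  [[8, 6], [10, 4]]])          # a 2 x 2 x 2 example
print(g2_conditional_independence(table))
```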

8.
9.
10.
Bayesian and empirical Bayesian estimation methods are reviewed and proposed for the row and column parameters in two-way contingency tables without interaction. Rasch's multiplicative Poisson model for misreadings is discussed in an example. The case is treated where assumptions of exchangeability are reasonable a priori for the unknown parameters. Two different types of prior distributions are compared; it appears that gamma priors yield more tractable results than lognormal priors.
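The tractability of gamma priors comes from Poisson-gamma conjugacy: the posterior is again gamma, so posterior means are available in closed form, whereas lognormal priors require numerical integration. A minimal sketch with hypothetical data, assuming a known exposure per row in the spirit of the multiplicative Poisson model:

```python
import numpy as np

def gamma_poisson_posterior_mean(counts, exposure, a=1.0, b=1.0):
    """With x ~ Poisson(exposure * rate) and rate ~ Gamma(a, b), the
    posterior is Gamma(a + x, b + exposure), whose mean is
    (a + x) / (b + exposure) -- closed form, no integration needed."""
    counts = np.asarray(counts, float)
    return (a + counts) / (b + np.asarray(exposure, float))

# hypothetical misreading counts per reader, with known exposure
counts = [3., 7., 1.]
exposure = [100., 120., 80.]
print(gamma_poisson_posterior_mean(counts, exposure))
```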

11.
The spatial interaction between two or more classes might cause multivariate clustering patterns such as segregation or association, which can be tested using a nearest neighbor contingency table (NNCT). The null hypothesis is randomness in the nearest neighbor structure, which may result from random labeling (RL) or complete spatial randomness of points from two or more classes (henceforth called CSR independence). We consider Dixon's class-specific segregation test and introduce a new class-specific test, which is a new decomposition of Dixon's overall chi-squared segregation statistic. We analyze the distributional properties and compare the empirical significance levels and power estimates of the tests using extensive Monte Carlo simulations. We demonstrate that the new class-specific tests have comparable performance with the currently available tests based on NNCTs. For illustrative purposes, we use three example data sets and provide guidelines for using these tests.
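A minimal sketch of the NNCT machinery under the random labeling null (my simplified illustration: the naive χ²-style statistic below ignores the dependence among NNCT cells, which Dixon's test and the authors' decompositions correct for):

```python
import numpy as np

def nnct_chi2(points, labels):
    """Build a nearest neighbor contingency table (NNCT) and compare it
    with its expectation under random labeling: E[N_ii] = n_i(n_i-1)/(n-1)
    and E[N_ij] = n_i n_j / (n-1). NOTE: NNCT cells are dependent, so this
    naive statistic only approximates a properly corrected test."""
    pts = np.asarray(points, float)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    n, k = len(pts), len(np.unique(labels))
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nn = d.argmin(axis=1)                      # index of each point's NN
    nnct = np.zeros((k, k))
    for a, i in enumerate(classes):
        for b, j in enumerate(classes):
            nnct[a, b] = np.sum((labels == i) & (labels[nn] == j))
    sizes = np.array([(labels == c).sum() for c in classes], float)
    exp = np.outer(sizes, sizes) / (n - 1)
    exp[np.diag_indices(k)] = sizes * (sizes - 1) / (n - 1)
    return nnct, np.sum((nnct - exp) ** 2 / exp)

rng = np.random.default_rng(1)
pts = rng.random((60, 2))                      # CSR points in the unit square
labs = np.repeat([0, 1], 30)                   # two classes of 30 points
print(nnct_chi2(pts, labs))
```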

12.
For square contingency tables with ordered categories, Caussinus [Annales de la Faculté des Sciences de l'Université de Toulouse (1965) Vol. 29, pp. 77–182] and Agresti [Statistics and Probability Letters (1983) Vol. 1, pp. 313–316] considered the quasi-symmetry and the linear diagonal-parameter symmetry models, respectively, which have multiplicative forms for cell probabilities. This paper proposes two kinds of models that have similar multiplicative forms for cumulative probabilities that an observation will fall in row (column) category i or below and column (row) category j (>i) or above. The endometrial cancer data are analyzed using these models.
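The paper's models act on cumulative probabilities; as background for square ordered tables, the simplest model in this family is plain symmetry, p_ij = p_ji, which Bowker's statistic tests directly (a standard sketch, not the paper's proposed models):

```python
import numpy as np
from scipy.stats import chi2

def bowker_symmetry_test(table):
    """Bowker's test of the symmetry model p_ij = p_ji for a square
    table: chi2 = sum_{i<j} (n_ij - n_ji)^2 / (n_ij + n_ji) on
    r(r-1)/2 degrees of freedom."""
    t = np.asarray(table, float)
    r = t.shape[0]
    iu = np.triu_indices(r, k=1)               # cells above the diagonal
    num = (t[iu] - t.T[iu]) ** 2               # t.T[iu] gives the mirror cells
    den = t[iu] + t.T[iu]
    stat = np.sum(num[den > 0] / den[den > 0])
    df = r * (r - 1) // 2
    return stat, df, chi2.sf(stat, df)

ratings = [[20, 10,  5],
           [ 3, 30, 15],
           [ 2,  4, 40]]                       # hypothetical square table
print(bowker_symmetry_test(ratings))
```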

13.
In this paper we study a new class of statistical models for contingency tables. We define this class of models through a subset of the binomial equations of the classical independence model. We prove that they are log-linear and we use some notions from Algebraic Statistics to compute their sufficient statistic and their parametric representation. Moreover, we show how to compute maximum likelihood estimates and to perform exact inference through the Diaconis-Sturmfels algorithm. Examples show that these models can be useful in a wide range of applications.
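The Diaconis-Sturmfels algorithm runs a Markov chain over all tables sharing the observed sufficient statistics, using moves derived from the model's binomial equations. For the classical independence model the moves are the familiar 2×2 swaps; a minimal Monte Carlo exact test along these lines (standard independence-model moves, not the paper's more general model class):

```python
import numpy as np
from math import lgamma

def log_fiber_weight(t):
    """Log target density on the fiber: proportional to 1 / prod(x_ij!)."""
    return -sum(lgamma(x + 1) for x in t.ravel())

def ds_exact_pvalue(table, steps=20000, seed=0):
    """Monte Carlo exact test of independence: 2x2 basic moves preserve
    the margins (the sufficient statistics), and a Metropolis step
    targets the conditional distribution on the fiber."""
    rng = np.random.default_rng(seed)
    t = np.asarray(table, float).copy()
    exp = np.outer(t.sum(1), t.sum(0)) / t.sum()   # fixed: margins never change
    chi2_stat = lambda x: np.sum((x - exp) ** 2 / exp)
    obs, (r, c), hits = chi2_stat(t), t.shape, 0
    for _ in range(steps):
        i, j = rng.choice(r, size=2, replace=False)
        k, l = rng.choice(c, size=2, replace=False)
        eps = rng.choice([-1, 1])
        prop = t.copy()
        prop[i, k] += eps; prop[j, l] += eps
        prop[i, l] -= eps; prop[j, k] -= eps
        if prop.min() >= 0:                        # stay inside the fiber
            if np.log(rng.random()) < log_fiber_weight(prop) - log_fiber_weight(t):
                t = prop
        hits += chi2_stat(t) >= obs
    return hits / steps

print(ds_exact_pvalue([[10, 2, 3], [1, 8, 4], [2, 3, 12]]))
```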

14.
Cochran [3] derives a test of association when k 2×2 contingency tables are combined. We show in this paper how to extend Cochran's test to the combining of k r×c contingency tables, using a multiple comparison technique similar to the one presented by Dunn [4]. An example is included.
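For reference, the k 2×2 case that the paper extends is the Cochran (Mantel-Haenszel) statistic; a minimal sketch with hypothetical strata:

```python
import numpy as np
from scipy.stats import chi2

def cochran_mh_test(tables):
    """Cochran's (Mantel-Haenszel) test of association across k
    stratified 2x2 tables, `tables` shaped (k, 2, 2); no continuity
    correction."""
    t = np.asarray(tables, float)
    a = t[:, 0, 0]                             # top-left cell per stratum
    n1 = t[:, 0, :].sum(1); n2 = t[:, 1, :].sum(1)   # row totals
    m1 = t[:, :, 0].sum(1); m2 = t[:, :, 1].sum(1)   # column totals
    N = t.sum(axis=(1, 2))
    e = n1 * m1 / N                            # E[a] under H0, per stratum
    v = n1 * n2 * m1 * m2 / (N ** 2 * (N - 1)) # hypergeometric variance
    stat = (a.sum() - e.sum()) ** 2 / v.sum()
    return stat, chi2.sf(stat, df=1)

strata = [[[12, 8], [6, 14]],
          [[9, 11], [4, 16]]]                  # hypothetical 2 x 2 x 2 data
print(cochran_mh_test(strata))
```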

15.
Statistica Neerlandica (1959) 13(4): 433-444
To test whether the two characteristics by which the frequencies in a contingency table are classified are independent, a χ² test is usually employed. In constructing such a test the total frequency is regarded as fixed. This article shows how a χ² test can still be used when sampling is continued until a predetermined frequency is reached in one cell. The total frequency is then a stochastic variable. The properties of the marginal distributions of a negative multinomial distribution are used to estimate the parameters appearing in the χ² distribution.
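A small simulation makes the inverse sampling scheme concrete (my sketch with hypothetical cell probabilities; the paper's actual contribution, estimating the χ² parameters from the negative multinomial marginals, is not reproduced here):

```python
import numpy as np

def inverse_sample_table(p, stop_cell, r, rng):
    """Draw cells one at a time with probabilities p (an I x J array)
    until `stop_cell` has been observed r times, so the total frequency
    is a stochastic variable, as in the inverse sampling scheme."""
    counts = np.zeros_like(p)
    flat = p.ravel() / p.sum()
    while counts[stop_cell] < r:
        idx = rng.choice(flat.size, p=flat)          # one more observation
        counts[np.unravel_index(idx, p.shape)] += 1
    return counts

rng = np.random.default_rng(7)
p = np.array([[0.2, 0.1],
              [0.3, 0.4]])                           # hypothetical cell probabilities
tab = inverse_sample_table(p, stop_cell=(0, 0), r=25, rng=rng)
n = tab.sum()                                        # random total frequency
exp = np.outer(tab.sum(1), tab.sum(0)) / n           # usual independence fit
print(tab, ((tab - exp) ** 2 / exp).sum())
```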

16.
17.
We argue that the meaning of “scientific productivity” takes on various forms under different conditions. The methodology offered in this paper demonstrates that fluctuations in reliability coefficients (and possibly the validity of the construct) are associated with work context (academic vs. non-academic), individual attributes (young vs. veteran researchers), professional affiliation (scientists vs. engineers) and research characteristics (theoretical vs. experimental, externally vs. internally funded research). These results are of critical importance for the evaluation of scientific work, especially since they imply the existence of contexts in which several productivity indicators are invalid. The conceptual and the methodological implications of the results are discussed.

18.
The asymptotic approach and Fisher's exact approach have often been used for testing the association between two dichotomous variables. The asymptotic approach may be appropriate in large samples but is often criticized for unacceptably high actual type I error rates for small to medium sample sizes. Fisher's exact approach suffers from conservative type I error rates and low power. For these reasons, a number of exact unconditional approaches have been proposed, which are generally more powerful than their exact conditional counterparts. We consider the traditional unconditional approach based on maximization and compare it to our presented approach, which is based on estimation and maximization. We extend the unconditional approach based on estimation and maximization to designs with the total sum fixed. The procedures based on the Pearson chi-square, Yates's corrected, and likelihood ratio test statistics are evaluated with regard to actual type I error rates and powers. A real example is used to illustrate the various testing procedures. The unconditional approach based on estimation and maximization performs well, having an actual level much closer to the nominal level. The Pearson chi-square and likelihood ratio test statistics work well with this efficient unconditional approach, which is generally more powerful than the other p-value calculation methods in the scenarios considered.
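The traditional maximization approach computes, for each value of the nuisance parameter, the probability of tables at least as extreme as the one observed, and takes the supremum; the estimation-and-maximization variant studied in the paper restricts that search to a neighborhood of the estimated nuisance parameter. A minimal sketch of the plain maximization version for two independent binomials (fixed row sums), using the pooled chi-squared statistic:

```python
import numpy as np
from scipy.stats import binom

def unconditional_pvalue(x1, n1, x2, n2, grid=200):
    """Exact unconditional p-value (maximization approach) for
    H0: p1 = p2 with two independent binomials, using the pooled
    chi-squared statistic and a grid over the nuisance parameter."""
    def stat(a, b):
        phat = (a + b) / (n1 + n2)
        var = phat * (1 - phat) * (1 / n1 + 1 / n2)
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(var > 0, (a / n1 - b / n2) ** 2 / var, 0.0)
    a = np.arange(n1 + 1)[:, None]
    b = np.arange(n2 + 1)[None, :]
    extreme = stat(a, b) >= stat(x1, x2) - 1e-12   # tables at least as extreme
    best = 0.0
    for p in np.linspace(1e-6, 1 - 1e-6, grid):    # sup over the nuisance p
        prob = binom.pmf(a, n1, p) * binom.pmf(b, n2, p)
        best = max(best, prob[extreme].sum())
    return best

print(unconditional_pvalue(7, 12, 1, 13))
```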

19.
Several jackknife estimators of a relative risk in a single 2×2 contingency table and of a common relative risk in a 2×2×K contingency table are presented. The estimators are based on the maximum likelihood estimator in a single table and on an estimator proposed by Tarone (1981) for stratified samples, respectively. For the stratified case, a sampling scheme is assumed where the number of observations within each table tends to infinity but the number of tables remains fixed. The asymptotic properties of the above estimators are derived. In particular, we present two general results which, under certain regularity conditions, yield consistency and asymptotic normality of every jackknife estimator of a class of functions of binomial probabilities.
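For the single-table case, a delete-one jackknife of the log relative risk can be sketched by treating the 2×2 table as individual Bernoulli observations (a minimal illustration of the construction, with hypothetical counts):

```python
import numpy as np

def jackknife_log_rr(a, n1, b, n2):
    """Delete-one jackknife for the log relative risk
    log((a/n1)/(b/n2)) of a single 2x2 table, treating the table as
    n1 + n2 individual Bernoulli observations."""
    log_rr = lambda a_, n1_, b_, n2_: np.log((a_ / n1_) / (b_ / n2_))
    theta = log_rr(a, n1, b, n2)
    # the n leave-one-out values take only four distinct forms
    cases = [(log_rr(a - 1, n1 - 1, b, n2), a),       # drop an exposed case
             (log_rr(a, n1 - 1, b, n2), n1 - a),      # drop an exposed non-case
             (log_rr(a, n1, b - 1, n2 - 1), b),       # drop an unexposed case
             (log_rr(a, n1, b, n2 - 1), n2 - b)]      # drop an unexposed non-case
    n = n1 + n2
    loo = np.repeat([v for v, _ in cases], [m for _, m in cases])
    jack = n * theta - (n - 1) * loo.mean()           # bias-corrected estimate
    var = (n - 1) / n * np.sum((loo - loo.mean()) ** 2)
    return jack, var

print(jackknife_log_rr(20, 100, 10, 100))
```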

20.
Statistical offices in developing countries are confronted with the challenge of introducing, with limited means, methods and tools that ensure the production of durable statistics satisfying regional, national and international requirements. Taking into account the evolution, over recent years, in the attitude of national authorities, funding partners and statisticians themselves regarding the building up of relevant and durable statistics, the present paper proposes that each developing country adopt a Minimum Programme for Statistics (MPS) centred on four domains: coordination, national accounts, short-term economic and social analysis, and dissemination.

