Similar literature (20 results)
1.
In missing data problems, there is often a natural test statistic for testing a statistical hypothesis had all the data been observed. A fuzzy p-value approach to hypothesis testing has recently been proposed, implemented by imputing the missing values in the "complete data" test statistic with values simulated from the conditional null distribution given the observed data. We argue that imputing data in this way inevitably leads to a loss of power. For the case of a scalar parameter, we show that the asymptotic efficiency of the score test based on the imputed "complete data" relative to the score test based on the observed data is given by the ratio of the observed-data information to the complete-data information. Three examples involving probit regression, a normal random effects model, and unidentified paired data are used for illustration. For testing linkage disequilibrium based on pooled genotype data, simulation results show that the imputed Neyman–Pearson and Fisher exact tests are less powerful than a Wald-type test based on the observed-data maximum likelihood estimator. In conclusion, we caution against the routine use of the fuzzy p-value approach in latent variable or missing data problems and suggest some viable alternatives.
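
In symbols (ours, not the paper's), writing I_comp, I_obs and I_mis for the complete-data, observed-data and missing information, the efficiency ratio above is an instance of the standard missing-information identity and can never exceed one:

    \mathrm{ARE} \;=\; \frac{I_{\mathrm{obs}}(\theta)}{I_{\mathrm{comp}}(\theta)}
    \;=\; 1 - \frac{I_{\mathrm{mis}}(\theta)}{I_{\mathrm{comp}}(\theta)} \;\le\; 1,
    \qquad\text{where } I_{\mathrm{comp}}(\theta) = I_{\mathrm{obs}}(\theta) + I_{\mathrm{mis}}(\theta).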

2.
Consider the loglinear model for categorical data under the assumption of multinomial sampling. We are interested in testing hypotheses on the parameter space that can be written in terms of constraints on the cell frequencies. The usual likelihood ratio test, with the maximum likelihood estimator for the unspecified parameters, is generalized to tests based on φ-divergence statistics, using the minimum φ-divergence estimator. These tests yield the classical likelihood ratio test as a special case. Asymptotic distributions for the new φ-divergence test statistics are derived under the null hypothesis.
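
For concreteness, a common form of such a statistic in the φ-divergence literature (our notation, not necessarily the paper's: p̂_i are the empirical cell frequencies, p_i(θ̃) the fitted ones at the minimum φ-divergence estimator θ̃) is

    T_\varphi \;=\; \frac{2n}{\varphi''(1)} \sum_{i} p_i(\tilde\theta)\,
    \varphi\!\left( \frac{\hat p_i}{p_i(\tilde\theta)} \right),

which reduces to the likelihood ratio statistic 2n \sum_i \hat p_i \log(\hat p_i / p_i(\tilde\theta)) for the choice \varphi(x) = x \log x - x + 1.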

3.
We offer an exposition of modern higher order likelihood inference and introduce software to implement this in a quite general setting. The aim is to make more accessible an important development in statistical theory and practice. The software, implemented in an R package, requires only that the user provide code to compute the likelihood function and to specify extra-likelihood aspects of the model, such as stopping rule or censoring model, through a function generating a dataset under the model. The exposition charts a narrow course through the developments, intending thereby to make these more widely accessible. It includes the likelihood ratio approximation to the distribution of the maximum likelihood estimator, that is, the p* formula, and the transformation of this yielding a second-order approximation to the distribution of the signed likelihood ratio test statistic, based on a modified signed likelihood ratio statistic r*. This follows developments of Barndorff-Nielsen and others. The software utilises the approximation to required Jacobians as developed by Skovgaard, which is included in the exposition. Several examples of using the software are provided.

4.
We discuss a regression model in which the regressors are dummy variables. The basic idea is that the observation units can be assigned to some well-defined combination of treatments, corresponding to the dummy variables. This assignment cannot be made without error; that is, misclassification can play a role. The situation is analogous to regression with errors in variables, where identification of the parameters is a well-known problem. We first show that, in our case, the parameters are not identified by the first two moments but can be identified by the likelihood. We then analyze two estimators: a moment estimator involving moments up to the third order, and a maximum likelihood estimator computed with the help of the EM algorithm. Both estimators are evaluated in a small Monte Carlo experiment.
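
The attenuation driving the identification problem is easy to see by simulation. The following R sketch (ours; the coefficients and the 10% misclassification rate are hypothetical) shows naive OLS shrinking the slope on a misclassified dummy towards zero:

    ## Simulate a dummy regressor observed with symmetric misclassification
    set.seed(1)
    n     <- 10000
    x     <- rbinom(n, 1, 0.5)          # true treatment indicator
    y     <- 1 + 2 * x + rnorm(n)       # outcome; true slope = 2
    flip  <- rbinom(n, 1, 0.1)          # 10% misclassification
    x_obs <- ifelse(flip == 1, 1 - x, x)
    coef(lm(y ~ x))                     # close to 2
    coef(lm(y ~ x_obs))                 # attenuated, about 1.6 in this design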

5.
The present paper investigates several issues of estimation and hypothesis testing in the context of a single-market disequilibrium model. The paper attempts to shed light on the following four questions: (1) What are the small-sample properties of the maximum likelihood estimator in various disequilibrium models? (2) How can one test the hypothesis of equilibrium vs. disequilibrium? (3) Can one reasonably estimate the unobservable demand and supply quantities from observable data? and (4) What are the consequences of using an equilibrium model instead of a disequilibrium one, or of using a misspecified disequilibrium model? Each of these questions is examined with the aid of sampling experiments.
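
For readers unfamiliar with the setup, here is a minimal R sketch of the canonical single-market disequilibrium model, in which only the short side of the market is observed; all coefficients are hypothetical:

    ## Q_t = min(D_t, S_t): the traded quantity is the short side
    set.seed(2)
    n <- 500
    p <- runif(n, 1, 10)             # price
    D <- 10 - 0.8 * p + rnorm(n)     # latent demand
    S <-  1 + 0.6 * p + rnorm(n)     # latent supply
    Q <- pmin(D, S)                  # observed quantity
    regime <- D < S                  # TRUE where demand is the short side
    ## OLS of Q on p mixes the two regimes, which is why the literature
    ## turns to maximum likelihood for such models.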

6.
Restricted maximum likelihood is preferred by many to full maximum likelihood for estimation in variance-component and other random-coefficient models because the variance estimator is unbiased. It is shown that in some balanced designs this unbiasedness is accompanied by an inflation of the mean squared error. An estimator of the cluster-level variance that is uniformly more efficient than the full maximum likelihood estimator is derived. Estimators of the variance ratio are also studied.
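
The divisor effect behind REML's unbiasedness is easiest to see in the ordinary linear model, where the ML variance estimator divides the residual sum of squares by n and the REML estimator by n − p. This R sketch (ours, on simulated data) is not the variance-component setting of the abstract, but it shows the same phenomenon in miniature:

    set.seed(3)
    n   <- 20
    x   <- rnorm(n)
    y   <- 1 + x + rnorm(n, sd = 2)     # true error variance = 4
    fit <- lm(y ~ x)
    rss <- sum(resid(fit)^2)
    c(ml = rss / n, reml = rss / (n - 2), truth = 4)  # ML is biased downwards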

7.
Finite mixtures offer a rich class of distributions for modelling a variety of random phenomena in numerous fields of study. Using the sample interpoint distances (IPDs), we propose the IPD test statistic for testing homogeneity of a mixture of multivariate power series distributions or multivariate normal distributions. We derive the distribution of the IPDs drawn from a finite mixture of multivariate power series and multivariate normal distributions. Based on the empirical distribution of the IPDs, we construct a bootstrap test of homogeneity for other multivariate finite mixture models. The IPD test is applied to mixture models for matrix-valued distributions, and a test of homogeneity for Wishart mixtures is presented. Numerical comparisons show that the IPD test has accurate type I error rates and is more powerful in most multivariate cases than the expectation–maximization (EM) test and the modified likelihood ratio test.

8.
The paper reviews recent work on statistical methods for using linkage disequilibrium to locate disease susceptibility genes, given a set of marker genes at known positions in the genome. The paper starts by considering a simple deterministic model for linkage disequilibrium and discusses recent attempts to elaborate it to include the effects of stochastic influences, of "drift", either by the use of Wright–Fisher models or by approaches based on the coalescence of the genealogy of the sample of disease chromosomes. Most of this first part of the paper concerns a series of diallelic markers and, in this case, the models so far proposed are hierarchical probability models for multivariate binary data. Likelihoods are intractable, and most approaches to linkage disequilibrium mapping amount to marginal models for pairwise associations between individual markers and the disease susceptibility locus. Approaches to evaluation of a full likelihood require Monte Carlo methods in order to integrate over the large number of unknowns. The fact that the initial state of the stochastic process which has led to present-day allele frequencies is unknown is noted, and its implications for the hierarchical probability model are discussed. Difficulties and opportunities arising from more polymorphic markers and extended marker haplotypes are indicated. Connections between the hierarchical modelling approach and methods based upon identity by descent and haplotype sharing by seemingly unrelated cases are explored. Finally, problems resulting from unknown modes of inheritance, incomplete penetrance, and "phenocopies" are briefly reviewed.

9.
Price indices for heterogeneous goods such as real estate or fine art constitute crucial information for institutional or private investors considering alternative investment decisions in times of financial market turmoil. Classical mean-variance analysis of alternative investments has been hampered by the lack of a systematic treatment of volatility in these markets. In this paper we propose a hedonic regression framework which explicitly defines an underlying stochastic process for the price index, allowing the volatility parameter to be treated as the object of interest. The model can be estimated by maximum likelihood in combination with the Kalman filter. We derive theoretical properties of the volatility estimator and show that it outperforms the standard estimator. We show that extensions allowing for time-varying volatility are straightforward using a local-likelihood approach. In an application to a large data set of international blue chip artists, we show that volatility of the art market, although generally lower than that of financial markets, rose after the financial crisis of 2008–09 but sharply decreased during the recent debt crisis.
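
As a toy analogue (not the paper's hedonic specification), a latent random-walk log price index observed with noise can be fitted by maximum likelihood via the Kalman filter using base R's StructTS:

    set.seed(10)
    level <- cumsum(rnorm(120, sd = 0.05))   # latent log index (random walk)
    y     <- level + rnorm(120, sd = 0.2)    # noisy observed log prices
    fit   <- StructTS(ts(y), type = "level") # local-level model, ML + Kalman
    fit$coef                                 # estimated level and noise variances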

10.
A test statistic is developed for making inference about a block-diagonal structure of the covariance matrix when the dimensionality p exceeds n, where n = N − 1 and N denotes the sample size. The suggested procedure extends the complete-independence results. Because the classical hypothesis testing methods based on the likelihood ratio degenerate when p > n, the main idea is to turn instead to a distance function between the null and alternative hypotheses. The test statistic is then constructed using a consistent estimator of this function, where consistency is considered in an asymptotic framework that allows p to grow together with n. The suggested statistic is also shown to be asymptotically normal under the null hypothesis. Some auxiliary results on the moments of products of multivariate normal random vectors and higher-order moments of Wishart matrices, which are important for the evaluation of the test statistic, are derived. We perform an empirical power analysis for a number of alternative covariance structures.
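
Purely to illustrate the distance-function idea (this is not the paper's statistic), one can measure how far the sample covariance matrix lies from its block-diagonal restriction; the block sizes below are hypothetical:

    set.seed(4)
    N <- 15; p <- 30                        # "p larger than n" regime
    X <- matrix(rnorm(N * p), N, p)
    S <- cov(X)
    blocks <- rep(1:3, each = 10)           # hypothesised block membership
    S0 <- S * outer(blocks, blocks, "==")   # zero the between-block entries
    sum((S - S0)^2)                         # squared Frobenius distance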

11.
A frequently occurring problem is to find the maximum likelihood estimate (MLE) of p subject to p ∈ C, where C is a subset of the probability vectors in R^k. The problem has been discussed by many authors, who mainly focused on the case in which p is restricted by linear or log-linear constraints. In this paper, we establish the relationship between the maximum likelihood estimation of p restricted to p ∈ C and the EM algorithm, and demonstrate that the maximum likelihood estimator can be computed through the EM algorithm (Dempster et al. in J R Stat Soc Ser B 39:1–38, 1977). Several examples are analyzed by the proposed method.
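
A classic worked instance of this EM connection is the genetic-linkage example from the cited Dempster, Laird and Rubin paper, where a four-cell multinomial has probabilities constrained to (1/2 + t/4, (1 − t)/4, (1 − t)/4, t/4):

    y <- c(125, 18, 20, 34)                  # observed cell counts
    t <- 0.5                                 # starting value
    for (i in 1:50) {
      x2 <- y[1] * (t / 4) / (1 / 2 + t / 4)          # E-step: split cell 1
      t  <- (x2 + y[4]) / (x2 + y[2] + y[3] + y[4])   # M-step
    }
    t                                        # converges to about 0.627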

12.
In a seminal paper, Mak, Journal of the Royal Statistical Society B, 55, 1993, 945, derived an efficient algorithm for solving non-linear unbiased estimation equations. In this paper, we show that when Mak's algorithm is applied to biased estimation equations, it results in the estimates that would come from solving a bias-corrected estimation equation, making it a consistent estimator if regularity conditions hold. In addition, the properties that Mak established for his algorithm also apply in the case of biased estimation equations but for estimates from the bias-corrected equations. The marginal likelihood estimator is obtained when the approach is applied to both maximum likelihood and least squares estimation of the covariance matrix parameters in the general linear regression model. The new approach results in two new estimators when applied to the profile and marginal likelihood functions for estimating the lagged dependent variable coefficient in the dynamic linear regression model. Monte Carlo simulation results show the new approach leads to a better estimator when applied to the standard profile likelihood. It is therefore recommended for situations in which standard estimators are known to be biased.

13.
We consider nonparametric/semiparametric estimation and testing of econometric models with data-dependent smoothing parameters. Most existing work on asymptotic distributions of a nonparametric/semiparametric estimator or test statistic is based on deterministic smoothing parameters, while in practice it is important to use data-driven methods to select them. In this paper we give a simple sufficient condition that can be used to establish the first-order asymptotic equivalence of a nonparametric estimator or test statistic with stochastic smoothing parameters to its counterpart with deterministic smoothing parameters. We also allow for general weakly dependent data.

14.
We reexamine the methods used in estimating comovements among US regional home prices and find that there are insufficient moments to ensure a normal limit necessary for employing the quasi-maximum likelihood estimator. Hence we propose applying the self-weighted quasi-maximum exponential likelihood estimator and a bootstrap method to test and account for the asymmetry of comovements as well as different magnitudes across state pairs. Our results reveal interstate asymmetric tail dependence based on observed house price indices rather than residuals from fitting autoregressive–generalized autoregressive conditional heteroskedasticity (AR-GARCH) models.

15.
Xiuli Wang, Gaorong Li, Lu Lin, Metrika (2011) 73(2):171–185
In this paper, we apply the empirical likelihood method to semiparametric varying-coefficient partially linear errors-in-variables models. An empirical log-likelihood ratio statistic is suggested for the unknown parameter β, which is of primary interest. We show that the proposed statistic has an asymptotic standard chi-square distribution under suitable conditions, and hence it can be used to construct a confidence region for β. Simulations indicate that, in terms of coverage probabilities and average lengths of the confidence intervals, the proposed method performs better than the least-squares method. We also give the maximum empirical likelihood estimator (MELE) of β and prove that the MELE is asymptotically normal under suitable conditions.
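
For a flavour of the empirical likelihood machinery (applied here to a simple mean, not the paper's partially linear errors-in-variables model), and assuming the interface of the emplik package's el.test function:

    # install.packages("emplik")
    library(emplik)
    set.seed(7)
    x <- rnorm(50, mean = 1)
    el.test(x, mu = 1)    # returns -2 log EL ratio; compare with chi-squared(1)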

16.
There is by now a long tradition of using the EM algorithm to find maximum-likelihood estimates (MLEs) when the data are incomplete in any of a wide range of ways, even when the observed-data likelihood can easily be evaluated and numerical maximisation of that likelihood is available as a conceptually simple route to the MLEs. It is rare in the literature to see numerical maximisation employed if EM is possible. But with excellent general-purpose numerical optimisers now available free, there is no longer any reason, as a matter of course, to avoid direct numerical maximisation of likelihood. In this tutorial, I present seven examples of models in which numerical maximisation of likelihood appears to have some advantages over the use of EM as a route to MLEs. The mathematical and coding effort is minimal, as there is no need to derive and code the E and M steps, only a likelihood evaluator. In all the examples, the unconstrained optimiser nlm available in R is used, and transformations are used to impose constraints on parameters. I suggest therefore that the following question be asked of proposed new applications of EM: Can the MLEs be found more simply and directly by using a general-purpose numerical optimiser?
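
A minimal version of the pattern the tutorial advocates, using R's nlm with a log transformation to keep parameters positive (the gamma example is ours, not one of the paper's seven):

    set.seed(5)
    y <- rgamma(100, shape = 2, rate = 1)
    negloglik <- function(par) {             # par lives on the log scale
      shape <- exp(par[1]); rate <- exp(par[2])
      -sum(dgamma(y, shape = shape, rate = rate, log = TRUE))
    }
    fit <- nlm(negloglik, p = c(0, 0))       # only a likelihood evaluator needed
    exp(fit$estimate)                        # back-transform to (shape, rate)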

17.
Y. Takagi, Metrika (2010) 71(1):17–31
We discuss the problem of testing the non-inferiority of a new treatment compared with several standard ones in a survival model where the underlying distribution is exponential and censoring times are fixed. We derive the asymptotic distribution of the k-sample likelihood ratio statistic. We construct a testing procedure with asymptotic size α based on the likelihood ratio statistic.
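
Under fixed (Type I) censoring at time c the exponential log-likelihood is d log λ − λT, with d the event count and T the total time at risk, so the k-sample likelihood ratio statistic is simple to assemble. The R sketch below (ours, testing plain equality of hazards rather than the paper's non-inferiority hypothesis) illustrates:

    set.seed(6)
    c0 <- 2                                    # fixed censoring time
    rates <- c(1.0, 1.2, 1.4)                  # k = 3 exponential hazards
    loglik <- function(d, tot, lam) d * log(lam) - lam * tot
    part <- numeric(3); D <- 0; Tot <- 0
    for (j in 1:3) {
      t   <- rexp(100, rates[j])
      d   <- sum(t <= c0)                      # events seen before c0
      tot <- sum(pmin(t, c0))                  # total time at risk
      part[j] <- loglik(d, tot, d / tot)       # group MLE: lambda = d / tot
      D <- D + d; Tot <- Tot + tot
    }
    2 * (sum(part) - loglik(D, Tot, D / Tot))  # approx chi-squared, k - 1 df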

18.
We analyse the finite sample properties of maximum likelihood estimators for dynamic panel data models. In particular, we consider transformed maximum likelihood (TML) and random effects maximum likelihood (RML) estimation. We show that TML and RML estimators are solutions to a cubic first-order condition in the autoregressive parameter. Furthermore, in finite samples both likelihood estimators might lead to a negative estimate of the variance of the individual-specific effects. We consider different approaches taking into account the non-negativity restriction for the variance. We show that these approaches may lead to a solution different from the unique global unconstrained maximum. In an extensive Monte Carlo study we find that this issue is non-negligible for small values of T and that different approaches might lead to different finite sample properties. Furthermore, we find that the Likelihood Ratio statistic provides size control in small samples, albeit with low power due to the flatness of the log-likelihood function. We illustrate these issues modelling US state level unemployment dynamics.

19.
We present a nonparametric study of current status data in the presence of death. Such data arise from biomedical investigations in which patients are examined for the onset of a certain disease, for example, tumor progression, but may die before the examination. A key difference between such studies on human subjects and the survival–sacrifice model in animal carcinogenicity experiments is that, due to ethical and perhaps technical reasons, deceased human subjects are not examined, so that the information on their disease status is lost. We show that, for current status data with death, only the overall and disease-free survival functions can be identified, whereas the cumulative incidence of the disease is not identifiable. We describe a fast and stable algorithm to estimate the disease-free survival function by maximizing a pseudo-likelihood with plug-in estimates for the overall survival rates. It is then proved that the global rate of convergence for the nonparametric maximum pseudo-likelihood estimator is equal to O_p(n^{-1/3}) or the convergence rate of the estimated overall survival function, whichever is slower. Simulation studies show that the nonparametric maximum pseudo-likelihood estimators are fairly accurate in small- to medium-sized samples. Real data from breast cancer studies are analyzed as an illustration.

20.
We generalize the weak instrument robust score or Lagrange multiplier and likelihood ratio instrumental variables (IV) statistics towards multiple parameters and a general covariance matrix so they can be used in the generalized method of moments (GMM). The extension of Moreira's [2003. A conditional likelihood ratio test for structural models. Econometrica 71, 1027–1048] conditional likelihood ratio statistic to GMM preserves its expression, except that it becomes conditional on a statistic that tests the rank of a matrix. We analyze the spurious power decline of Kleibergen's [2002. Pivotal statistics for testing structural parameters in instrumental variables regression. Econometrica 70, 1781–1803; 2005. Testing parameters in GMM without assuming that they are identified. Econometrica 73, 1103–1124] score statistic and show that an independent misspecification pre-test overcomes it. We construct identification statistics that reflect whether the confidence sets of the parameters are bounded. A power study and the possible shapes of confidence sets illustrate the analysis.
