Similar Documents
20 similar documents found (search time: 171 ms)
1.
The paper reviews recent work on statistical methods for using linkage disequilibrium to locate disease susceptibility genes, given a set of marker genes at known positions in the genome. The paper starts by considering a simple deterministic model for linkage disequilibrium and discusses recent attempts to elaborate it to include the effects of stochastic influences, of "drift", by the use of either Wright–Fisher models or approaches based on the coalescence of the genealogy of the sample of disease chromosomes. Most of this first part of the paper concerns a series of diallelic markers and, in this case, the models so far proposed are hierarchical probability models for multivariate binary data. Likelihoods are intractable and most approaches to linkage disequilibrium mapping amount to marginal models for pairwise associations between individual markers and the disease susceptibility locus. Approaches to evaluation of a full likelihood require Monte Carlo methods in order to integrate over the large number of unknowns. The fact that the initial state of the stochastic process which has led to present-day allele frequencies is unknown is noted, and its implications for the hierarchical probability model are discussed. Difficulties and opportunities arising as a result of more polymorphic markers and extended marker haplotypes are indicated. Connections between the hierarchical modelling approach and methods based upon identity by descent and haplotype sharing by seemingly unrelated cases are explored. Finally, problems resulting from unknown modes of inheritance, incomplete penetrance, and "phenocopies" are briefly reviewed.
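
As a point of reference for the pairwise marker–disease associations mentioned above, the following sketch computes the classical disequilibrium coefficient D and the squared correlation r² between two diallelic loci from assumed haplotype frequencies; the function name and the frequencies are hypothetical, and this is not the hierarchical model discussed in the paper.

```python
# Illustrative sketch (not from the paper): pairwise linkage disequilibrium
# between a diallelic marker and a diallelic disease-susceptibility locus,
# computed from assumed haplotype frequencies.

def pairwise_ld(p_ab, p_a, p_b):
    """D and r^2 for two diallelic loci.

    p_ab : frequency of the haplotype carrying allele A at locus 1 and B at locus 2
    p_a, p_b : marginal allele frequencies of A and B
    """
    d = p_ab - p_a * p_b                                 # coefficient of disequilibrium
    r2 = d**2 / (p_a * (1 - p_a) * p_b * (1 - p_b))      # squared correlation
    return d, r2

# Hypothetical haplotype frequency: A and B co-occur more often than expected.
d, r2 = pairwise_ld(p_ab=0.42, p_a=0.6, p_b=0.5)
print(f"D = {d:.3f}, r^2 = {r2:.3f}")
```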

2.
L. Nie. Metrika, 2006, 63(2): 123–143
Generalized linear and nonlinear mixed-effects models are used extensively in biomedical, social, and agricultural sciences. The statistical analysis of these models is based on the asymptotic properties of the maximum likelihood estimator. However, consistency of the maximum likelihood estimator is usually assumed without proof. A rigorous proof of the consistency by verifying conditions from existing results can be very difficult due to the integrated likelihood. In this paper, we present some easily verifiable conditions for the strong consistency of the maximum likelihood estimator in generalized linear and nonlinear mixed-effects models. Based on this result, we prove that the maximum likelihood estimator is consistent for some frequently used models such as mixed-effects logistic regression models and growth curve models.
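
To make the "integrated likelihood" concrete, here is a minimal sketch, not taken from the paper, of maximum likelihood estimation for a random-intercept (mixed-effects) logistic regression in which the cluster-level random effect is integrated out by Gauss–Hermite quadrature; all parameter values, sample sizes, and variable names are illustrative assumptions.

```python
# A minimal sketch: random-intercept logistic model, integrated likelihood
# approximated by Gauss-Hermite quadrature, maximized with scipy.
import numpy as np
from scipy.optimize import minimize
from scipy.special import roots_hermite

rng = np.random.default_rng(0)

# Simulate clustered binary data: y_ij ~ Bernoulli(logit^{-1}(b0 + b1*x_ij + u_i)),
# with cluster effects u_i ~ N(0, sigma^2).
n_clusters, n_per, beta0, beta1, sigma = 200, 5, -0.5, 1.0, 0.8
x = rng.normal(size=(n_clusters, n_per))
u = rng.normal(scale=sigma, size=(n_clusters, 1))
p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x + u)))
y = rng.binomial(1, p)

nodes, weights = roots_hermite(25)                       # Gauss-Hermite nodes/weights

def neg_loglik(theta):
    b0, b1, log_sigma = theta
    s = np.exp(log_sigma)
    ll = 0.0
    for i in range(n_clusters):
        # integrate the cluster likelihood over u_i ~ N(0, s^2)
        eta = b0 + b1 * x[i][:, None] + np.sqrt(2.0) * s * nodes[None, :]
        pij = 1.0 / (1.0 + np.exp(-eta))
        lik_given_u = np.prod(np.where(y[i][:, None] == 1, pij, 1 - pij), axis=0)
        ll += np.log(np.sum(weights * lik_given_u) / np.sqrt(np.pi))
    return -ll

fit = minimize(neg_loglik, x0=np.zeros(3), method="BFGS")
b0_hat, b1_hat, sigma_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
print(b0_hat, b1_hat, sigma_hat)   # should approach (-0.5, 1.0, 0.8) as the sample grows
```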

3.
Maximum likelihood estimation of monotone and concave production frontiers
In this paper we bring together the previously separate parametric and nonparametric approaches to production frontier estimation by developing composed error models for maximum likelihood estimation from nonparametrically specified classes of frontiers. This approach avoids the untestable restrictions of parametric functional forms and also provides a statistical foundation for nonparametric frontier estimation. We first examine the single output setting and then extend our formulation to the multiple output setting. The key step in developing the estimation problems is to identify operational constraint sets to ensure estimation from the desired class of frontiers. We also suggest algorithms for solving the resulting constrained likelihood function optimization problems. The refereeing process of this paper was handled through R. Robert Russell. Helpful comments from Bob Russell and two anonymous referees are gratefully acknowledged. We are, of course, solely responsible for any remaining errors or omissions.

4.
The exponentiated Weibull distribution is a convenient alternative to the generalized gamma distribution to model time-to-event data. It accommodates both monotone and nonmonotone hazard shapes and is flexible enough to describe data with wide-ranging characteristics. It can also be used for regression analysis of time-to-event data. The maximum likelihood method is thus far the most widely used technique for inference, though there is a considerable body of research on improving the maximum likelihood estimators in terms of asymptotic efficiency. For example, considerable attention has recently been given to applying James–Stein shrinkage ideas to parameter estimation in regression models. We propose nonpenalty shrinkage estimation for the exponentiated Weibull regression model for time-to-event data. Comparative studies suggest that the shrinkage estimators outperform the maximum likelihood estimators in terms of statistical efficiency. Overall, the shrinkage method leads to more accurate statistical inference, a fundamental and desirable component of statistical theory.
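
For orientation, the sketch below fits the exponentiated Weibull distribution to simulated lifetimes by plain maximum likelihood using SciPy's `exponweib` parameterization; it illustrates the baseline estimator, not the shrinkage estimators proposed in the paper, and the parameter values are assumptions.

```python
# A hedged illustration: maximum likelihood fit of the exponentiated Weibull
# distribution to simulated time-to-event data with scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_a, true_c, true_scale = 2.0, 1.5, 10.0          # assumed "true" parameters
times = stats.exponweib.rvs(true_a, true_c, loc=0, scale=true_scale,
                            size=500, random_state=rng)

# MLE with the location fixed at zero (lifetimes start at 0).
a_hat, c_hat, loc_hat, scale_hat = stats.exponweib.fit(times, floc=0)
print(a_hat, c_hat, scale_hat)

# The fitted hazard h(t) = f(t)/S(t) can be inspected for monotone vs.
# non-monotone (e.g. unimodal or bathtub) shapes.
t = np.linspace(0.1, 40, 200)
hazard = stats.exponweib.pdf(t, a_hat, c_hat, loc=0, scale=scale_hat) / \
         stats.exponweib.sf(t, a_hat, c_hat, loc=0, scale=scale_hat)
```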

5.
We analyse the finite sample properties of maximum likelihood estimators for dynamic panel data models. In particular, we consider transformed maximum likelihood (TML) and random effects maximum likelihood (RML) estimation. We show that TML and RML estimators are solutions to a cubic first‐order condition in the autoregressive parameter. Furthermore, in finite samples both likelihood estimators might lead to a negative estimate of the variance of the individual‐specific effects. We consider different approaches taking into account the non‐negativity restriction for the variance. We show that these approaches may lead to a solution different from the unique global unconstrained maximum. In an extensive Monte Carlo study we find that this issue is non‐negligible for small values of T and that different approaches might lead to different finite sample properties. Furthermore, we find that the Likelihood Ratio statistic provides size control in small samples, albeit with low power due to the flatness of the log‐likelihood function. We illustrate these issues modelling US state level unemployment dynamics.

6.
This paper studies an alternative quasi likelihood approach under possible model misspecification. We derive a filtered likelihood from a given quasi likelihood (QL), called a limited information quasi likelihood (LI-QL), that contains relevant but limited information on the data generation process. Our LI-QL approach, on the one hand, extends the robustness of the QL approach to inference problems for which the existing approach does not apply. Our study in this paper, on the other hand, builds a bridge between the classical and Bayesian approaches for statistical inference under possible model misspecification. We can establish a large sample correspondence between the classical QL approach and our LI-QL based Bayesian approach. An interesting finding is that the asymptotic distribution of an LI-QL based posterior and that of the corresponding quasi maximum likelihood estimator share the same “sandwich”-type second moment. Based on the LI-QL we can develop inference methods that are useful for practical applications under possible model misspecification. In particular, we can develop the Bayesian counterparts of classical QL methods that carry all the nice features of the latter studied in White (1982). In addition, we can develop a Bayesian method for analyzing model specification based on an LI-QL.

7.
A study of business strategy was carried out in 86 organizations in the crop protection industry. A multi-operational approach was used to enable validation of data by triangulation, including cognitive mapping used in an unusual way. This provided an unintended opportunity to conduct a comparative evaluation of interactive investigational methods in a relatively controlled, if unsophisticated, manner. Results were interesting enough to suggest that further investigation is needed into the impact of various subject-generated factors such as face validity on methodological effectiveness, as well as more traditional criteria such as construct validity of particular methods. Accordingly, process issues affecting repertory grids, cognitive mapping and software for the analysis of cognitive maps (COPE) are described and discussed. Recommendations are made for improvements to mapping and software, and further studies are suggested.

8.
Tiefeng Ma, Shuangzhe Liu. Metrika, 2013, 76(3): 409–425
In this paper, the estimation of order-restricted means of two normal distributions is studied under the LINEX loss function, when the variances are unknown and possibly unequal. Under certain sufficient conditions to be described in this paper, the proposed plug-in estimators uniformly perform better than the existing unrestricted maximum likelihood estimators. Further, the restricted maximum likelihood estimators are compared with the unrestricted maximum likelihood estimators under the Pitman nearness criterion. A simulation study is conducted and it is shown that our proposed plug-in estimators perform better than the unrestricted maximum likelihood estimators. An illustrative example of real data analysis is also given to compare the estimators.
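
As a rough numerical illustration of the setting, the simulation below compares the unrestricted sample mean with one simple order-restricted plug-in estimate (precision-weighted pooling when the order μ1 ≤ μ2 is violated) under the LINEX loss; this is not necessarily the estimator studied in the paper, and the loss constant, sample sizes, and variances are assumptions.

```python
# Sketch: simulated LINEX risk, L(d, mu) = exp(a*(d - mu)) - a*(d - mu) - 1,
# of an unrestricted vs. a simple order-restricted estimate of mu1.
import numpy as np

rng = np.random.default_rng(5)

def linex(d, mu, a=1.0):
    z = a * (d - mu)
    return np.exp(z) - z - 1.0

mu1, mu2 = 0.0, 0.0                      # boundary case of the restriction mu1 <= mu2
s1, s2, n1, n2 = 1.0, 2.0, 15, 15
loss_u, loss_r = [], []
for _ in range(5000):
    x1 = rng.normal(mu1, s1, n1)
    x2 = rng.normal(mu2, s2, n2)
    m1, m2 = x1.mean(), x2.mean()
    if m1 > m2:
        # plug-in pooling with precision weights when the sample order is violated
        w1, w2 = n1 / x1.var(ddof=1), n2 / x2.var(ddof=1)
        m1 = m2 = (w1 * m1 + w2 * m2) / (w1 + w2)
    loss_u.append(linex(x1.mean(), mu1))
    loss_r.append(linex(m1, mu1))
print(np.mean(loss_u), np.mean(loss_r))  # compare simulated average losses
```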

9.
We consider estimation and testing of linkage disequilibrium from genotypic data on a random sample of sibs, such as monozygotic and dizygotic twins. We compute the maximum likelihood estimator with an EM‐algorithm and a likelihood ratio statistic that takes the family structure into account. As we are interested in applying this to twin data, we also allow observations on single children, so that monozygotic twins can be included. We allow a non‐zero recombination fraction between the loci of interest, so that linkage disequilibrium between both linked and unlinked loci can be tested. The EM‐algorithm for computing the maximum likelihood estimator of the haplotype frequencies, and the likelihood ratio test‐statistic, are described in detail. It is shown that the usual estimators of haplotype frequencies, which ignore that the sibs are related, are inefficient, as is the corresponding likelihood ratio test for linkage disequilibrium.
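
For context, the sketch below shows the textbook EM algorithm for two-locus haplotype frequencies from unphased genotypes on unrelated individuals, where only the double heterozygote has an ambiguous phase; the sib-pair extension and the family-structure likelihood described in the abstract are not implemented, and all data are simulated under assumed frequencies.

```python
# A minimal EM sketch for haplotype frequencies (unrelated individuals).
import numpy as np
from collections import Counter

def em_haplotypes(genotypes, n_iter=200):
    """EM estimate of haplotype frequencies (AB, Ab, aB, ab) from unphased
    two-locus genotypes given as (copies of allele A, copies of allele B)."""
    p = np.full(4, 0.25)
    counts = Counter(map(tuple, genotypes))
    for _ in range(n_iter):
        n = np.zeros(4)
        for (ga, gb), c in counts.items():
            if ga == 1 and gb == 1:
                # E-step for the double heterozygote: phase AB/ab vs Ab/aB is latent
                w = p[0] * p[3] / (p[0] * p[3] + p[1] * p[2])
                n += c * np.array([w, 1 - w, 1 - w, w])
            else:
                # phase is determined by the genotype; count haplotype copies directly
                a = [1, 1] if ga == 2 else ([0, 0] if ga == 0 else [1, 0])
                b = [1, 1] if gb == 2 else ([0, 0] if gb == 0 else [1, 0])
                for ia, ib in zip(a, b):
                    n[(1 - ia) * 2 + (1 - ib)] += c
        p = n / n.sum()                      # M-step: renormalize expected counts
    return p

# Simulate genotypes from assumed haplotype frequencies and recover them.
true_p = np.array([0.4, 0.1, 0.2, 0.3])
rng = np.random.default_rng(2)
haps = rng.choice(4, size=(1000, 2), p=true_p)
to_alleles = np.array([[1, 1], [1, 0], [0, 1], [0, 0]])   # (A copies, B copies) per haplotype
geno = to_alleles[haps].sum(axis=1)                        # rows: (copies of A, copies of B)
print(em_haplotypes(geno))                                 # should approach true_p
```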

10.
A method is presented for the estimation of the parameters in the dynamic simultaneous equations model with vector autoregressive moving average disturbances. The estimation procedure is derived from the full information maximum likelihood approach and is based on Newton-Raphson techniques applied to the likelihood equations. The resulting two-step Newton-Raphson procedure involves only generalized instrumental variables estimation in the second step. This procedure also serves as the basis for an iterative scheme to solve the normal equations and obtain the maximum likelihood estimates of the conditional likelihood function. A nine-equation variant of the quarterly forecasting model of the US economy developed by Fair is then used as a realistic example to illustrate the estimation procedure described in the paper.

11.
This paper considers a panel stochastic production frontier model that allows the dynamic adjustment of technical inefficiency. In particular, we assume that inefficiency follows an AR(1) process. That is, the current year's inefficiency for a firm depends on its past inefficiency plus a transient inefficiency incurred in the current year. Interfirm variations in the transient inefficiency are explained by some firm-specific covariates. We consider four likelihood-based approaches to estimate the model: the full maximum likelihood, pairwise composite likelihood, marginal composite likelihood, and quasi-maximum likelihood approaches. Moreover, we provide Monte Carlo simulation results to examine and compare the finite-sample performances of the four above-mentioned likelihood-based estimators of the parameters. Finally, we provide an empirical application to a panel of 73 Finnish electricity distribution companies observed during 2008–2014 to illustrate the working of our proposed models.

12.
The notion of cointegration has led to a renewed interest in the identification and estimation of structural relations among economic time series. This paper reviews the different approaches that have been put forward in the literature for identifying cointegrating relationships and imposing (possibly over-identifying) restrictions on them. Next, various algorithms to obtain (approximate) maximum likelihood estimates and likelihood ratio statistics are reviewed, with an emphasis on so-called switching algorithms. The implementation of these algorithms is discussed and illustrated using an empirical example.

13.
Effective linkage detection and gene mapping require analysis of data jointly on members of extended pedigrees, jointly at multiple genetic markers. Exact likelihood computation is then often infeasible, but Markov chain Monte Carlo (MCMC) methods permit estimation of posterior probabilities of genome sharing among relatives, conditional upon marker data. In principle, MCMC also permits estimation of linkage analysis location score curves, but in practice effective MCMC samplers are hard to find. Although the whole-meiosis Gibbs sampler (M-sampler) performs well in some cases, for extended pedigrees and tightly linked markers better samplers are needed. However, using the M-sampler as a proposal distribution in a Metropolis-Hastings algorithm does allow genetic interference to be incorporated into the analysis.

14.
A demonstration is provided of rigorous statistical methodology whereby both the type and order of an error process can be identified in dynamic, single-equation econometric models. The paper relies heavily upon maximum likelihood estimation, nested likelihood ratio tests and the overfitting or exponentially weighted procedure for model selection. An application of the methodology to a class of quarterly wage determination models is included.
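
As a small, generic illustration of a nested likelihood ratio test for the order of an autoregressive error process (not the paper's wage-determination application), the sketch below fits a regression with AR(1) and AR(2) errors by maximum likelihood via statsmodels and compares them with a chi-squared test; the simulated data and coefficients are assumptions.

```python
# A minimal sketch: nested LR test for the order of an AR error process in a
# single-equation model with one exogenous regressor.
import numpy as np
from scipy import stats
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
n = 400
x = rng.normal(size=n)
e = np.zeros(n)
for t in range(2, n):                      # AR(2) errors
    e[t] = 0.5 * e[t - 1] + 0.3 * e[t - 2] + rng.normal()
y = 1.0 + 2.0 * x + e

small = ARIMA(y, exog=x, order=(1, 0, 0)).fit()   # AR(1) errors (restricted model)
large = ARIMA(y, exog=x, order=(2, 0, 0)).fit()   # AR(2) errors (unrestricted model)

lr = 2.0 * (large.llf - small.llf)                 # likelihood ratio statistic
p_value = stats.chi2.sf(lr, df=1)                  # one extra AR coefficient
print(f"LR = {lr:.2f}, p = {p_value:.4f}")
```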

15.
We offer an exposition of modern higher order likelihood inference and introduce software to implement this in a quite general setting. The aim is to make more accessible an important development in statistical theory and practice. The software, implemented in an R package, requires only that the user provide code to compute the likelihood function and to specify extra‐likelihood aspects of the model, such as stopping rule or censoring model, through a function generating a dataset under the model. The exposition charts a narrow course through the developments, intending thereby to make these more widely accessible. It includes the likelihood ratio approximation to the distribution of the maximum likelihood estimator, that is, the p* formula, and the transformation of this yielding a second‐order approximation to the distribution of the signed likelihood ratio test statistic, based on a modified signed likelihood ratio statistic r*. This follows developments of Barndorff‐Nielsen and others. The software utilises the approximation to required Jacobians as developed by Skovgaard, which is included in the exposition. Several examples of using the software are provided.
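
To indicate what the r* adjustment looks like in the simplest case, the sketch below computes the first-order signed likelihood root r and the modified statistic r* = r + log(q/r)/r for a scalar-parameter exponential model, using the exponential-family form with the Wald statistic q; this is a generic illustration under stated assumptions, not the R package described in the abstract, and the data are hypothetical.

```python
# A rough sketch: first-order and r*-adjusted one-sided p-values for the rate
# of an exponential model (scalar canonical parameter, no nuisance parameters).
import numpy as np
from scipy import stats

x = np.array([0.7, 1.9, 0.3, 2.4, 1.1, 0.5, 3.2, 0.9])   # hypothetical lifetimes
n, s = len(x), x.sum()
lam_hat = n / s                                            # MLE of the rate

def loglik(lam):
    return n * np.log(lam) - lam * s

def signed_roots(lam0):
    """Signed likelihood root r and modified root r* for H0: lambda = lam0."""
    r = np.sign(lam_hat - lam0) * np.sqrt(2.0 * (loglik(lam_hat) - loglik(lam0)))
    q = (lam_hat - lam0) * np.sqrt(n) / lam_hat            # Wald statistic, j(lam_hat) = n/lam_hat^2
    r_star = r + np.log(q / r) / r
    return r, r_star

lam0 = 0.5
r, r_star = signed_roots(lam0)
# one-sided p-values for H0: lambda = lam0 against lambda > lam0
print("first-order p-value:", stats.norm.sf(r))
print("second-order p-value:", stats.norm.sf(r_star))
```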

16.
Statistical inference and nonparametric efficiency: A selective survey
The purpose of this paper is to provide a brief and selective survey of statistical inference in nonparametric, deterministic, linear programming-based frontier models. The survey starts with nonparametric regularity tests, sensitivity analysis, two-stage analysis with regression, and nonparametric statistical tests. It then turns to the more recent literature which shows that DEA-type estimators are maximum likelihood, and, more importantly, the results concerning the asymptotic properties of these estimators. Also included is a discussion of recent attempts to employ resampling methods to derive empirical distributions for hypothesis testing.
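
As a concrete reference point for the linear programming-based frontier estimators surveyed here, the sketch below computes input-oriented, constant-returns-to-scale DEA efficiency scores by solving the envelopment linear program with SciPy; the data and unit names are hypothetical.

```python
# A hedged sketch: input-oriented, constant-returns DEA efficiency of one
# decision-making unit, solved as a linear program.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, j0):
    """min theta s.t. X @ lam <= theta * X[:, j0], Y @ lam >= Y[:, j0], lam >= 0.

    X[i, j] is input i of unit j; Y[r, j] is output r of unit j."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                      # minimize theta
    A_in = np.hstack([-X[:, [j0]], X])               # X @ lam - theta * x0 <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])        # -Y @ lam <= -y0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    bounds = [(0, None)] * (n + 1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Hypothetical data: 5 units, 2 inputs, 1 output.
X = np.array([[2.0, 3.0, 6.0, 4.0, 5.0],
              [5.0, 4.0, 8.0, 3.0, 6.0]])
Y = np.array([[10.0, 12.0, 20.0, 11.0, 13.0]])
scores = [dea_efficiency(X, Y, j) for j in range(X.shape[1])]
print(np.round(scores, 3))                            # scores of 1.0 mark frontier units
```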

17.
According to the law of likelihood, statistical evidence is represented by likelihood functions and its strength measured by likelihood ratios. This point of view has led to a likelihood paradigm for interpreting statistical evidence, which carefully distinguishes evidence about a parameter from error probabilities and personal belief. Like other paradigms of statistics, the likelihood paradigm faces challenges when data are observed incompletely, due to non-response or censoring, for instance. Standard methods to generate likelihood functions in such circumstances generally require assumptions about the mechanism that governs the incomplete observation of data, assumptions that usually rely on external information and cannot be validated with the observed data. Without reliable external information, the use of untestable assumptions driven by convenience could potentially compromise the interpretability of the resulting likelihood as an objective representation of the observed evidence. This paper proposes a profile likelihood approach for representing and interpreting statistical evidence with incomplete data without imposing untestable assumptions. The proposed approach is based on partial identification and is illustrated with several statistical problems involving missing data or censored data. Numerical examples based on real data are presented to demonstrate the feasibility of the approach.

18.
This paper introduces a new representation for seasonally cointegrated variables, namely the complex error correction model, which allows statistical inference to be performed by reduced rank regression. The suggested estimators and test statistics are asymptotically equivalent to their maximum likelihood counterparts. The small sample properties are evaluated by a Monte Carlo study and an empirical example is presented to illustrate the concepts and methods.

19.
MAPS FOR MANAGERS: WHERE ARE WE? WHERE DO WE GO FROM HERE?
Research on managerial cognition in general, and on cognitive mapping in particular, is receiving a great deal of attention in Europe and the US, but the work being done is currently disparate and loosely coupled. Furthermore, the development of maps as a decision aid has tended to focus on specific sub-areas of cognition. In this article we argue that the broad strategic concerns of managers require a portfolio of different kinds of cognitive maps. The interactions among these maps are as important as the functions of each one separately. We develop a framework for classifying cognitive maps and argue for the importance of managing multiple maps.

20.
This paper incorporates vintage differences and forecasts into the Markov switching models described by Hamilton (1994). The vintage differences and forecasts induce parameter breaks close to the end of the sample, too close for standard maximum likelihood techniques to produce precise parameter estimates. A supplementary procedure estimates the statistical properties of the end-of-sample observations that behave differently from the rest, allowing inferred probabilities to reflect the breaks. Empirical results using real-time data show that these techniques improve the ability of a Markov switching model based on GDP and GDI to recognize the start of the 2001 recession.
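
For background, the following sketch implements the basic two-state Hamilton filter (filtered regime probabilities and the log likelihood) with fixed, assumed parameters; the real-time vintage adjustments and end-of-sample break handling described in the abstract are not included, and all parameter values are illustrative.

```python
# A simplified sketch of the Hamilton (1994) filter for a two-state Markov
# switching mean model.
import numpy as np
from scipy import stats

def hamilton_filter(y, mu, sigma, P):
    """Filtered state probabilities Pr(S_t = j | y_1..t) and the log likelihood.

    mu, sigma : per-state means and standard deviations (length-2 arrays)
    P         : 2x2 transition matrix, P[i, j] = Pr(S_t = j | S_{t-1} = i)
    """
    n = len(y)
    filt = np.zeros((n, 2))
    prob = np.array([P[1, 0], P[0, 1]])               # ergodic (stationary) distribution
    prob = prob / prob.sum()
    loglik = 0.0
    for t in range(n):
        pred = prob @ P                                # Pr(S_t | y_1..t-1)
        dens = stats.norm.pdf(y[t], loc=mu, scale=sigma)  # state-conditional densities
        joint = pred * dens
        loglik += np.log(joint.sum())
        prob = joint / joint.sum()                     # Bayes update
        filt[t] = prob
    return filt, loglik

# Hypothetical parameters: state 0 = expansion, state 1 = recession.
rng = np.random.default_rng(4)
P = np.array([[0.95, 0.05], [0.20, 0.80]])
mu, sigma = np.array([0.8, -0.5]), np.array([0.6, 0.8])
states = [0]
for _ in range(199):
    states.append(rng.choice(2, p=P[states[-1]]))
y = rng.normal(mu[np.array(states)], sigma[np.array(states)])
filt, ll = hamilton_filter(y, mu, sigma, P)            # filt[:, 1] ~ recession probabilities
```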

