Similar Articles
19 similar articles found.
1.
Empirical Bayes methods of estimating the local false discovery rate (LFDR) by maximum likelihood estimation (MLE), originally developed for large numbers of comparisons, are applied to a single comparison. Specifically, when a lower bound is assumed on the mixing proportion of true null hypotheses, the LFDR MLE can yield reliable hypothesis tests and confidence intervals given as few as one comparison. Simulations indicate that constrained LFDR MLEs perform markedly better than conventional methods, both in testing and in confidence intervals, for high values of the mixing proportion, but not for low values. (A decision-theoretic interpretation of the confidence distribution made those comparisons possible.) In conclusion, the constrained LFDR estimators and the resulting effect-size interval estimates are not only effective multiple comparison procedures but might also replace p-values and confidence intervals more generally. The new methodology is illustrated with an analysis of proteomics data.
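As a rough illustration of the two-group model behind the constrained LFDR MLE, the sketch below fits, by a naive grid search, the mixture pi0·N(0,1) + (1−pi0)·N(mu,1) to a single z-statistic subject to pi0 ≥ pi0_min and returns the estimated LFDR. The grids and the normal alternative are illustrative assumptions, not the paper's exact procedure.

```python
import math

def lfdr_mle(z, pi0_min=0.5, mus=None):
    """Constrained LFDR MLE sketch for a single z-statistic.

    Two-group model: z ~ pi0*N(0,1) + (1-pi0)*N(mu,1), with the
    mixing proportion constrained to pi0 >= pi0_min.
    The grids over pi0 and mu are hypothetical choices."""
    if mus is None:
        mus = [m / 10 for m in range(-50, 51)]  # mu grid on [-5, 5]
    phi = lambda x, m: math.exp(-0.5 * (x - m) ** 2) / math.sqrt(2 * math.pi)
    best = None
    for k in range(101):  # pi0 grid on [pi0_min, 1]
        pi0 = pi0_min + (1 - pi0_min) * k / 100
        for mu in mus:
            lik = pi0 * phi(z, 0.0) + (1 - pi0) * phi(z, mu)
            if best is None or lik > best[0]:
                best = (lik, pi0, mu)
    lik, pi0, mu = best
    return pi0 * phi(z, 0.0) / lik  # estimated local false discovery rate
```

A large |z| then yields a small estimated LFDR, while z near 0 gives an LFDR close to the assumed lower bound on pi0.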

2.
Falk Bathe & Jürgen Franz, Metrika 43(1):149–164 (1996)
The availability of a stochastic repairable system depends on its failure behaviour and on the repair strategy. In this paper we deal with a general repair model for a system, using auxiliary counting processes and corresponding intensities that accommodate various degrees of repair (between minimal repair and perfect repair). To determine the model parameters we need estimators depending on failure times and repair times: the maximum likelihood (ML) estimator and Bayes estimators are considered. Special results are obtained by the use of Weibull-type intensities and random observation times.

3.
For estimating p (≥ 2) independent Poisson means, the paper considers a compromise between maximum likelihood and empirical Bayes estimators. Such compromise estimators enjoy both good componentwise and good ensemble properties. Research supported by NSF Grant MCS-8218091.
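One common way to realize such a compromise (a hypothetical sketch, not necessarily the paper's estimator) is to apply Clevenson–Zidek-type empirical Bayes shrinkage and then limit how far each component may move from its MLE:

```python
def compromise_poisson(x, d=1.0):
    """Limited-translation compromise between the Poisson MLE (x_i itself)
    and a Clevenson-Zidek-type empirical Bayes shrinkage estimator
    x_i * s / (s + p - 1), where s = sum(x) and p = len(x).
    The cap `d` on the translation is an illustrative tuning choice."""
    p, s = len(x), sum(x)
    eb = [xi * s / (s + p - 1) for xi in x] if s + p - 1 > 0 else list(x)
    # clamp each shrunken component to within d of the componentwise MLE
    return [min(max(e, xi - d), xi + d) for e, xi in zip(eb, x)]
```

With d = 0 the estimator reduces to the MLE, and with d = ∞ to pure shrinkage; intermediate d trades componentwise risk against ensemble risk, which is the flavor of compromise the abstract describes.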

4.
While the likelihood ratio measures statistical support for an alternative hypothesis about a single parameter value, it is undefined for an alternative hypothesis that is composite in the sense that it corresponds to multiple parameter values. Regarding the parameter of interest as a random variable enables measuring support for a composite alternative hypothesis without requiring the elicitation or estimation of a prior distribution, as described below. In this setting, in which parameter randomness represents variability rather than uncertainty, the ideal measure of the support for one hypothesis over another is the difference in the posterior and prior log-odds. That ideal support may be replaced by any measure of support that, on a per-observation basis, is asymptotically unbiased as a predictor of the ideal support. Such measures of support are easily interpreted and, if desired, can be combined with any specified or estimated prior probability of the null hypothesis. Two qualifying measures of support are minimax-optimal. An application to proteomics data indicates that a modification of optimal support computed from data for a single protein can closely approximate the estimated difference in posterior and prior odds that would be available with the data for 20 proteins.

5.
We obtain a relation between the time between two bird-catchings and the total resting period of a bird, leading to the problem of estimating the derivative of a convex density. We state a fundamental result on the nonparametric maximum likelihood estimator of a convex density. Further, we derive the optimal rate in the minimax risk sense for estimating the derivative of a convex density.

6.
We compare four estimation methods for the coefficients of a linear structural equation with instrumental variables. As the classical methods we consider the limited information maximum likelihood (LIML) estimator and the two-stage least squares (TSLS) estimator, and as the semi-parametric methods we consider the maximum empirical likelihood (MEL) estimator and the generalized method of moments (GMM) (or estimating equation) estimator. Tables and figures of the distribution functions of the four estimators are given for enough parameter values to cover most linear models of interest, including some heteroscedastic and nonlinear cases. We find that the LIML estimator performs well in terms of bounded loss functions and probabilities when the number of instruments is large, that is, in micro-econometric models with "many instruments" in the terminology of the recent econometric literature.

7.
Three tests for the skewness of an unknown distribution are derived for i.i.d. data. They are based on suitable normalizations of estimators of some usual skewness coefficients. Their asymptotic null distributions are derived. The tests are then shown to be consistent, and their power under some sequences of local alternatives is investigated. Their finite-sample properties are also studied through a simulation experiment and compared to those of the √b₂-test.

8.
This paper presents a method for estimating the model Λ(Y) = min(β′X + U, C), where Y is a scalar, Λ is an unknown increasing function, X is a vector of explanatory variables, β is a vector of unknown parameters, U has unknown cumulative distribution function F, and C is a censoring threshold. It is not assumed that Λ and F belong to known parametric families; they are estimated nonparametrically. This model includes many widely used models as special cases, including the proportional hazards model with unobserved heterogeneity. The paper develops n^(1/2)-consistent, asymptotically normal estimators of Λ and F. Estimators of β that are n^(1/2)-consistent and asymptotically normal already exist. The results of Monte Carlo experiments illustrate the finite-sample behavior of the estimators.

9.
This article examines volatility models for modeling and forecasting daily Standard & Poor's 500 (S&P 500) stock index returns, including the autoregressive moving average model, the Taylor and Schwert generalized autoregressive conditional heteroscedasticity (GARCH) model, the Glosten, Jagannathan and Runkle GARCH model and the asymmetric power ARCH (APARCH) model, under normal, Student's t and skewed Student's t conditional distributions. In addition, we undertake unit root tests (augmented Dickey–Fuller and Phillips–Perron), a co-integration test and an error correction model. We study the stationary APARCH(p) model, for which uniform convergence, strong consistency and asymptotic normality of the parameter estimators are proved under a simple ordered restriction. In fitting these models to S&P 500 daily stock index return data over the period 1 January 2002 to 31 December 2012, we find that the APARCH model with a skewed Student's t-distribution is the most effective for modeling and forecasting the daily stock index return series. The results of this study should be of great value to policy makers and investors in managing risk in stock market trading.
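For readers unfamiliar with the recursion underlying such models, here is a minimal Gaussian GARCH(1,1) negative log-likelihood, the simplest member of the family above (the APARCH extension adds a power term and an asymmetry parameter). Initializing the variance at the sample variance is a common convention, not the article's specification.

```python
import math

def garch11_neg_loglik(params, returns):
    """Negative Gaussian log-likelihood of a GARCH(1,1) model with
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}.
    A minimal sketch: zero conditional mean, variance started at the
    sample second moment."""
    omega, alpha, beta = params
    sigma2 = sum(r * r for r in returns) / len(returns)
    nll = 0.0
    for r in returns:
        nll += 0.5 * (math.log(2 * math.pi * sigma2) + r * r / sigma2)
        sigma2 = omega + alpha * r * r + beta * sigma2  # variance recursion
    return nll
```

In practice this function is handed to a numerical optimizer subject to omega > 0, alpha, beta ≥ 0 and alpha + beta < 1 (stationarity); swapping the Gaussian density for a skewed Student's t gives the variant the article favors.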

10.
This article presents an empirical Bayes method for estimating the transition probabilities of a generalized finite stationary Markov chain whose ith state is a multi-way contingency table. We use a log-linear model to describe the relationship between factors in each state. Prior knowledge about the main effects and interactions is described by a conjugate prior. Following the Bayesian paradigm, Bayes and empirical Bayes estimators relative to various loss functions are obtained. These procedures are illustrated by a real example. Finally, asymptotic normality of the empirical Bayes estimators is established.

11.
For a multilevel model with two levels and only a random intercept, the quality of different estimators of the random intercept is examined. Analytical results are given for the marginal model interpretation, in which negative estimates of the variance components are allowed. Except when there are only four or five level-2 units, the empirical Bayes estimator (EBE) has a lower average Bayes risk than the ordinary least squares estimator (OLSE). EBEs based on restricted maximum likelihood (REML) estimators of the variance components have a lower Bayes risk than EBEs based on maximum likelihood (ML) estimators. For the hierarchical model interpretation, in which estimates of the variance components are restricted to be positive, Monte Carlo simulations were performed. In this case the EBE has a lower average Bayes risk than the OLSE even for four or five level-2 units. For large numbers of level-1 units (30) or level-2 units (100), the performance of REML-based and ML-based EBEs is comparable. For small numbers of level-1 units (10) and level-2 units (25), the REML-based EBEs have a lower Bayes risk than ML-based EBEs only for high intraclass correlations (0.5).
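A minimal sketch of the EBE for the random intercepts, assuming the variance components tau2 (level-2) and sigma2 (level-1) have already been estimated (by ML or REML, the two options compared above); the shrinkage weight is the usual reliability tau2 / (tau2 + sigma2 / n_j):

```python
def eb_random_intercepts(groups, tau2, sigma2):
    """Empirical Bayes (shrinkage) estimates of random-intercept deviations
    in a two-level random-intercept model, taking the variance components
    tau2 and sigma2 as given (in practice they come from ML or REML)."""
    all_y = [y for g in groups for y in g]
    grand = sum(all_y) / len(all_y)  # estimate of the fixed intercept
    out = []
    for g in groups:
        ybar, n = sum(g) / len(g), len(g)
        lam = tau2 / (tau2 + sigma2 / n)  # reliability weight in [0, 1)
        out.append(lam * (ybar - grand))  # shrunken intercept deviation
    return out
```

The OLSE corresponds to lam = 1 (the raw group-mean deviation); the EBE pulls each group toward the grand mean, more strongly for small groups and small intraclass correlation, which is where the Bayes-risk comparisons above bite.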

12.
The classes of monotone or convex (and necessarily monotone) densities on [0, ∞) can be viewed as special cases of the classes of k-monotone densities on [0, ∞). These classes bridge the gap between the classes of monotone (1-monotone) and convex decreasing (2-monotone) densities, for which asymptotic results are known, and the class of completely monotone (∞-monotone) densities on [0, ∞). In this paper we consider non-parametric maximum likelihood and least squares estimators of a k-monotone density g0. We prove existence of the estimators and give characterizations. We also establish consistency properties, and show that the estimators are splines of degree k−1 with simple knots. We further provide asymptotic minimax risk lower bounds for estimating the derivatives of g0 at a fixed point x0, under a local assumption on g0.

13.
Pooling of data is often carried out to protect privacy or to save cost, with the claimed advantage that it does not lead to much loss of efficiency. We argue that this does not give the complete picture, as the estimation of different parameters is affected to different degrees by pooling. We establish a ladder of efficiency loss for estimating the mean, variance, skewness and kurtosis, and more generally multivariate joint cumulants, in powers of the pool size. The asymptotic efficiency of the pooled-data non-parametric/parametric maximum likelihood estimator relative to the corresponding unpooled-data estimator is reduced by a factor equal to the pool size whenever the order of the cumulant to be estimated is increased by one. The implications of this result are demonstrated in case–control genetic association studies with interactions between genes. Our findings provide a guideline for the discriminating use of data pooling in practice and the assessment of its relative efficiency. As exact maximum likelihood estimates are difficult to obtain if the pool size is large, we briefly address how to obtain computationally efficient estimates from pooled data, and suggest Gaussian estimation and non-parametric maximum likelihood as two feasible methods.
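The first rung of this ladder can be seen in a toy moment calculation: when only pool means of size k are observed, the overall mean is recovered directly, while the individual-level variance must be recovered as k times the variance of the pool means. The sketch below is a self-contained illustration with hypothetical variable names, not the paper's estimator.

```python
import statistics

def pooled_moments(values, k):
    """Method-of-moments sketch for pooled data: observations are averaged
    in pools of size k, so only the pool means are observed.
    Since Var(pool mean) = sigma^2 / k for independent observations,
    sigma^2 is recovered as k * sample variance of the pool means."""
    usable = len(values) - len(values) % k  # drop an incomplete final pool
    pools = [values[i:i + k] for i in range(0, usable, k)]
    means = [sum(p) / k for p in pools]
    return statistics.fmean(means), k * statistics.variance(means)
```

The inflation factor k in the variance estimate is exactly the per-rung efficiency loss the abstract describes: each step up the cumulant ladder (mean to variance to skewness, and so on) costs another factor of the pool size.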

14.
In this paper, we consider portmanteau tests for testing the adequacy of multiplicative seasonal autoregressive moving-average models under the assumption that the errors are uncorrelated but not necessarily independent. We relax the standard independence assumption on the error terms in order to extend the range of applications of seasonal autoregressive moving-average models. We study the asymptotic distributions of residual and normalized residual empirical autocovariances and autocorrelations under weak assumptions on the noise, and establish the asymptotic behavior of the proposed statistics. A set of Monte Carlo experiments and an application to the monthly mean total sunspot number are presented.
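For reference, the classical portmanteau statistic that such tests generalize is the Ljung–Box statistic computed from residual autocorrelations. The sketch below implements the standard i.i.d.-error version; the paper's contribution is the corrected asymptotic distribution of such statistics under weak, possibly dependent noise.

```python
def ljung_box(residuals, max_lag):
    """Classical Ljung-Box portmanteau statistic
    Q = n(n+2) * sum_{k=1}^{m} r_k^2 / (n - k),
    where r_k are the sample autocorrelations of the residuals."""
    n = len(residuals)
    mean = sum(residuals) / n
    c0 = sum((r - mean) ** 2 for r in residuals) / n  # lag-0 autocovariance
    q = 0.0
    for k in range(1, max_lag + 1):
        ck = sum((residuals[t] - mean) * (residuals[t - k] - mean)
                 for t in range(k, n)) / n
        q += (ck / c0) ** 2 / (n - k)
    return n * (n + 2) * q
```

Under the classical i.i.d. assumption Q is compared to a chi-squared distribution with max_lag minus the number of fitted parameters degrees of freedom; under the paper's weaker assumptions that reference distribution must be modified.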

15.
This paper reviews methods for handling complex sampling schemes when analysing categorical survey data. It is generally assumed that the complex sampling scheme does not affect the specification of the parameters of interest, only the methodology for making inference about these parameters. The organisation of the paper is loosely chronological. Contingency table data are emphasised first before moving on to the analysis of unit-level data. Weighted least squares methods, introduced in the mid 1970s along with methods for two-way tables, receive early attention. They are followed by more general methods based on maximum likelihood, particularly pseudo maximum likelihood estimation. Point estimation methods typically involve the use of survey weights in some way. Variance estimation methods are described in broad terms. There is a particular emphasis on methods of testing. The main modelling methods considered are log-linear models, logit models, generalised linear models and latent variable models. There is no coverage of multilevel models.

16.
We consider the Cox regression model and study the asymptotic global behavior of the Grenander-type estimator of a monotone baseline hazard function. This model is not included in the general setting of Durot (2007); nevertheless, we show that a similar central limit theorem holds for the Lp-error of the Grenander-type estimator. As an application of our main result, we propose a test for a Weibull baseline distribution based on the Lp-distance between the Grenander estimator and a parametric estimator of the baseline hazard. Simulation studies are performed to investigate the performance of this test.
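In its simplest form (a decreasing density rather than a baseline hazard, and ignoring censoring and covariates), a Grenander-type estimator is the left derivative of the least concave majorant of the empirical distribution function. The following self-contained sketch computes that majorant and its segment slopes for positive, distinct observations:

```python
def grenander_decreasing(xs):
    """Grenander-type estimator of a decreasing density on (0, infinity):
    slopes of the least concave majorant of the empirical CDF.
    A simplified sketch assuming positive, distinct observations."""
    xs = sorted(xs)
    n = len(xs)
    # vertices (0, 0), (x_(i), i/n) of the empirical CDF
    pts = [(0.0, 0.0)] + [(x, (i + 1) / n) for i, x in enumerate(xs)]
    stack = [pts[0]]
    for p in pts[1:]:
        while len(stack) >= 2:
            (x1, y1), (x2, y2) = stack[-2], stack[-1]
            # pop if the last vertex lies on or below the chord to p
            # (keeps the hull concave; cross-multiplied slope comparison)
            if (y2 - y1) * (p[0] - x2) <= (p[1] - y2) * (x2 - x1):
                stack.pop()
            else:
                break
        stack.append(p)
    # the density estimate on each segment is the segment's slope
    slopes = [(y2 - y1) / (x2 - x1)
              for (x1, y1), (x2, y2) in zip(stack, stack[1:])]
    return stack, slopes
```

The slopes are non-increasing by construction, which is the monotonicity constraint; the paper's Lp-error results concern the distance between such a shape-constrained estimate and the true monotone function.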

17.
Space–time autoregressive (STAR) models, introduced by Cliff and Ord [Spatial Autocorrelation (1973), Pion, London], are successfully applied in many areas of science, particularly when there is prior information about spatial dependence. These models have significantly fewer parameters than vector autoregressive models, in which all information about spatial and time dependence is deduced from the data. A more flexible class, generalized STAR models, was introduced in Borovkova et al. [Proc. 17th Int. Workshop on Statistical Modelling (2002), Chania, Greece], where the model parameters are allowed to vary per location. This paper establishes strong consistency and asymptotic normality of the least squares estimator in generalized STAR models. These results are obtained under minimal conditions on the sequence of innovations, which are assumed to form a martingale difference array. We investigate the quality of the normal approximation for finite samples by means of a numerical simulation study, and apply a generalized STAR model to a multivariate time series of monthly tea production in West Java, Indonesia.

18.
Recently, there has been renewed interest in the class of stochastic blockmodels (SBMs) and their applications to multi-subject brain networks. In our most recent work, we considered an extension of the classical SBM, termed heterogeneous SBM (Het-SBM), that models subject variability in the cluster-connectivity profiles through the addition of a logistic regression model with subject-specific covariates at the level of each block. Although this model has proved useful for both the clustering and inference aspects of multi-subject brain network data, including fleshing out differences in connectivity between patients and controls, it does not account for dependencies that may exist within subjects. To overcome this limitation, we propose an extension of Het-SBM, termed Het-Mixed-SBM, in which we model the within-subject dependencies by adding subject- and block-level random intercepts in the embedded logistic regression model. Using synthetic data, we investigate the accuracy of the partitions estimated by our proposed model as well as the validity of inference procedures based on the Wald and permutation tests. Finally, we illustrate the model by analyzing the resting-state fMRI networks of 99 healthy volunteers from the Human Connectome Project (HCP), using covariates such as age, gender, and IQ to explain the clustering patterns observed in the data.
