Similar Documents
20 similar documents retrieved.
1.
Hinkley (1977) derived two tests for the mean of a normal distribution with known coefficient of variation (c.v.) against right-sided alternatives: the locally most powerful (LMP) test and the conditional test based on the ancillary statistic for μ. In this paper, the likelihood ratio (LR) and Wald tests are derived for the one- and two-sided alternatives, as well as the two-sided version of the LMP test. The performances of these tests are compared with those of the classical t, sign and Wilcoxon signed rank tests, none of which uses the information on the c.v. A normal approximation is used for the null distribution of each test statistic except that of the t test. Simulation results indicate that all the tests maintain their type I error rates; that is, the attained level is close to the nominal level of significance. The power functions of the tests are estimated through simulation. The comparison indicates that for one-sided alternatives the LMP test is best, whereas for two-sided alternatives the LR or Wald test is best. The t, sign and Wilcoxon signed rank tests have lower power than the LMP, LR and Wald tests at various alternative values of μ, and the power difference is quite large in several simulation configurations. Moreover, when the c.v. is large, the t, sign and Wilcoxon signed rank tests have considerably lower power even for alternatives far from the null hypothesis. To study the sensitivity of the tests to violation of the normality assumption, the type I error rates are also estimated on observations from lognormal, gamma and uniform distributions; the newly derived tests maintain their type I error rates for moderate values of the c.v.
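As a rough illustration of why the known c.v. carries real information, the sketch below simulates the size and power of the classical one-sided t test against a simple Wald-type z test that plugs in σ = c.v. × μ0 under H0. This is not Hinkley's LMP statistic or the paper's exact design; the sample size, c.v. and alternative are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, cv, mu0, alpha, n_sim = 20, 0.5, 10.0, 0.05, 20_000

def rejection_rates(mu_true):
    X = rng.normal(mu_true, cv * mu_true, size=(n_sim, n))
    xbar, s = X.mean(axis=1), X.std(axis=1, ddof=1)
    # Classical one-sided t test: ignores the known c.v.
    t = (xbar - mu0) / (s / np.sqrt(n))
    # Wald-type z test: under H0 the standard deviation is cv * mu0.
    z = (xbar - mu0) / (cv * mu0 / np.sqrt(n))
    return (np.mean(t > stats.t.ppf(1 - alpha, n - 1)),
            np.mean(z > stats.norm.ppf(1 - alpha)))

print("size  (mu = mu0):      t = %.3f, z = %.3f" % rejection_rates(mu0))
print("power (mu = 1.2*mu0):  t = %.3f, z = %.3f" % rejection_rates(1.2 * mu0))
```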

2.
We calculate, by simulations, numerical asymptotic distribution functions of likelihood ratio tests for fractional unit roots and cointegration rank. Because these distributions depend on a real-valued parameter b which must be estimated, simple tabulation is not feasible. Partly owing to the presence of this parameter, the choice of model specification for the response surface regressions used to obtain the numerical distribution functions is more involved than is usually the case. We deal with model uncertainty by model averaging rather than by model selection. We make available a computer program which, given the dimension of the problem, q, and a value of b, provides either a set of critical values or the asymptotic P-value for any value of the likelihood ratio statistic.
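The following sketch illustrates the general recipe rather than the paper's specification: simulate finite-sample quantiles of a statistic over several sample sizes, fit several candidate response-surface regressions in powers of 1/T, and combine them by AIC-type model-averaging weights instead of selecting one. The toy statistic (a squared t-ratio) and the weighting scheme are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
Ts = np.array([10, 15, 20, 30, 40, 60, 80, 120, 160, 320])
n_rep = 20_000

# Step 1: simulate the finite-sample 95% quantile of a toy statistic
# (the squared t-ratio of an i.i.d. normal sample) at each sample size.
q = np.empty(len(Ts))
for i, T in enumerate(Ts):
    X = rng.standard_normal((n_rep, T))
    t = np.sqrt(T) * X.mean(axis=1) / X.std(axis=1, ddof=1)
    q[i] = np.quantile(t**2, 0.95)

# Step 2: candidate response-surface regressions of q on powers of 1/T.
fits, aic = [], []
for k in (1, 2, 3):
    D = np.vander(1.0 / Ts, k + 1, increasing=True)
    beta, *_ = np.linalg.lstsq(D, q, rcond=None)
    resid = q - D @ beta
    aic.append(len(q) * np.log(resid @ resid / len(q)) + 2 * (k + 1))
    fits.append(beta)

# Step 3: model-average with AIC weights instead of selecting one model.
aic = np.array(aic)
w = np.exp(-0.5 * (aic - aic.min()))
w /= w.sum()

def critical_value(T):
    """Model-averaged 95% critical value at sample size T."""
    preds = [np.vander(np.array([1.0 / T]), len(b), increasing=True)[0] @ b
             for b in fits]
    return float(w @ np.array(preds))

print("T = 25:  cv ~", round(critical_value(25), 3))
print("T = inf: cv ~", round(critical_value(10**9), 3))  # ~ chi2(1): 3.84
```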

3.
Y. Takagi, Metrika (2010) 71(1): 17-31
We discuss the problem of testing the non-inferiority of a new treatment compared with several standard ones in a survival model where the underlying distribution is exponential and censoring times are fixed. We derive the asymptotic distribution of the k-sample likelihood ratio statistic and construct a testing procedure with asymptotic size α based on it.
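A minimal one-sample analogue of the setting, assuming exponential lifetimes with a fixed (Type-I) censoring time c: the log-likelihood is l(λ) = d log λ − λ Σ xᵢ with d observed events and xᵢ = min(tᵢ, c), the MLE is λ̂ = d/Σxᵢ, and the LR statistic is referred to its asymptotic χ²(1) law. The paper's k-sample non-inferiority construction is more elaborate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, c, lam0 = 100, 2.0, 1.0   # sample size, fixed censoring time, H0 rate

t = rng.exponential(1.0 / lam0, size=n)   # true rate = lam0, so H0 holds
x = np.minimum(t, c)                      # observed (possibly censored) times
d = int((t <= c).sum())                   # number of uncensored events

def loglik(lam):
    # Events contribute log(lam) - lam*t_i; censored points contribute
    # -lam*c; together: d*log(lam) - lam * sum of observed times.
    return d * np.log(lam) - lam * x.sum()

lam_hat = d / x.sum()                     # MLE of the exponential rate
lr = 2.0 * (loglik(lam_hat) - loglik(lam0))
p_value = stats.chi2.sf(lr, df=1)         # asymptotic chi-square(1) reference
print(f"lambda_hat = {lam_hat:.3f}  LR = {lr:.3f}  p = {p_value:.3f}")
```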

4.
In frequentist inference, we commonly use a single point (point estimator) or an interval (confidence interval/"interval estimator") to estimate a parameter of interest. A very simple question is: Can we also use a distribution function ("distribution estimator") to estimate a parameter of interest in frequentist inference in the style of a Bayesian posterior? The answer is affirmative, and the confidence distribution is a natural choice of such a "distribution estimator". The concept of a confidence distribution has a long history, and its interpretation has long been fused with fiducial inference. Historically, it has been misconstrued as a fiducial concept, and has not been fully developed in the frequentist framework. In recent years, confidence distributions have attracted a surge of renewed attention, and several developments have highlighted their promising potential as an effective inferential tool. This article reviews recent developments of confidence distributions, along with a modern definition and interpretation of the concept. It includes distributional inference based on confidence distributions and its extensions, optimality issues and their applications. Based on the new developments, the concept of a confidence distribution subsumes and unifies a wide range of examples, from regular parametric (fiducial distribution) examples to bootstrap distributions, significance (p-value) functions, normalized likelihood functions, and, in some cases, Bayesian priors and posteriors. The discussion is entirely within the school of frequentist inference, with emphasis on applications providing useful statistical inference tools for problems where frequentist methods with good properties were previously unavailable or could not be easily obtained. Although it also draws attention to some of the differences and similarities among frequentist, fiducial and Bayesian approaches, the review is not intended to re-open the philosophical debate that has lasted more than two hundred years. On the contrary, it is hoped that the article will help bridge the gaps between these different statistical procedures.
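A textbook instance of a confidence distribution is the t-based CD for a normal mean, Hₙ(θ) = F_{t,n−1}(√n(θ − x̄)/s): its median is the usual point estimate, its quantiles reproduce the usual confidence limits, and its value at a hypothesized θ0 is a one-sided p-value. A minimal sketch:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(5.0, 2.0, size=30)
n, xbar, s = len(x), x.mean(), x.std(ddof=1)

def cd(theta):
    """t-based confidence distribution for the normal mean:
    H_n(theta) = F_{t,n-1}(sqrt(n) * (theta - xbar) / s)."""
    return stats.t.cdf(np.sqrt(n) * (theta - xbar) / s, df=n - 1)

def cd_quantile(p):
    """Inverse CD: its quantiles are one-sided confidence limits."""
    return xbar + s / np.sqrt(n) * stats.t.ppf(p, df=n - 1)

print("point estimate (CD median):", cd_quantile(0.5))       # equals xbar
print("95% CI:", (cd_quantile(0.025), cd_quantile(0.975)))   # usual t interval
# p-value for H0: mu <= 4 versus mu > 4 (small when xbar is far above 4):
print("one-sided p-value at theta0 = 4:", cd(4.0))
```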

5.
We present a nonparametric study of current status data in the presence of death. Such data arise from biomedical investigations in which patients are examined for the onset of a certain disease, for example, tumor progression, but may die before the examination. A key difference between such studies on human subjects and the survival-sacrifice model in animal carcinogenicity experiments is that, due to ethical and perhaps technical reasons, deceased human subjects are not examined, so that the information on their disease status is lost. We show that, for current status data with death, only the overall and disease-free survival functions can be identified, whereas the cumulative incidence of the disease is not identifiable. We describe a fast and stable algorithm to estimate the disease-free survival function by maximizing a pseudo-likelihood with plug-in estimates for the overall survival rates. It is then proved that the global rate of convergence for the nonparametric maximum pseudo-likelihood estimator is equal to Op(n−1/3) or the convergence rate of the estimated overall survival function, whichever is slower. Simulation studies show that the nonparametric maximum pseudo-likelihood estimators are fairly accurate in small- to medium-sized samples. Real data from breast cancer studies are analyzed as an illustration.
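For orientation, the sketch below handles the classical current status problem without death, where the NPMLE of the onset distribution reduces to isotonic regression of the status indicators on the examination times; the paper's pseudo-likelihood estimator with plug-in overall survival builds on this machinery. The simulation design is an assumption for illustration.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(4)
n = 200
onset = rng.exponential(2.0, size=n)       # true (unobserved) onset times
exam = rng.uniform(0.0, 6.0, size=n)       # examination (monitoring) times
status = (onset <= exam).astype(float)     # 1 = disease present at exam

# NPMLE of F(t) = P(onset <= t) at the examination times: isotonic
# (non-decreasing) regression of the status indicator on the exam time.
order = np.argsort(exam)
iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
F_hat = iso.fit_transform(exam[order], status[order])

for t0 in (1.0, 2.0, 4.0):
    i = np.searchsorted(exam[order], t0)
    print(f"F_hat({t0}) ~ {F_hat[max(i - 1, 0)]:.2f}   "
          f"true F = {1 - np.exp(-t0 / 2.0):.2f}")
```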

6.
Hotelling's T² statistic is an important tool for inference about the center of a multivariate normal population. However, hypothesis tests and confidence intervals based on this statistic can be adversely affected by outliers. Therefore, we construct an alternative inference technique based on a statistic which uses the highly robust MCD estimator [9] instead of the classical mean and covariance matrix. Recently, a fast algorithm was constructed to compute the MCD [10]. In our test statistic we use the reweighted MCD, which has a higher efficiency. The distribution of this new statistic differs from the classical one. Therefore, the key problem is to find a good approximation for this distribution. Similarly to the classical T² distribution, we obtain a multiple of a certain F-distribution. A Monte Carlo study shows that this distribution is an accurate approximation of the true distribution. Finally, the power and the robustness of the one-sample test based on our robust T² are investigated through simulation.
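A minimal sketch of the idea using scikit-learn's MinCovDet (which applies the reweighting step by default): form a Hotelling-type statistic from the MCD location and scatter, and, in place of the paper's multiple-of-F approximation, calibrate the 5% critical value by Monte Carlo. Sample sizes and the contamination pattern are arbitrary.

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(5)
n, p = 60, 3
mu0 = np.zeros(p)

def robust_T2(X, mu0):
    """Hotelling-type statistic built on the reweighted MCD estimator."""
    mcd = MinCovDet(random_state=0).fit(X)
    diff = mcd.location_ - mu0
    return len(X) * diff @ np.linalg.solve(mcd.covariance_, diff)

def classical_T2(X, mu0):
    diff = X.mean(axis=0) - mu0
    return len(X) * diff @ np.linalg.solve(np.cov(X, rowvar=False), diff)

# Calibrate the robust statistic's null critical value by Monte Carlo,
# standing in for the multiple-of-F approximation derived in the paper.
null = [robust_T2(rng.standard_normal((n, p)), mu0) for _ in range(500)]
crit = np.quantile(null, 0.95)

# H0 is true for the bulk of the data, but 10% gross outliers are added.
X = rng.standard_normal((n, p))
X[:6] += 8.0
print(f"classical T2 = {classical_T2(X, mu0):.1f} (dragged by outliers)")
print(f"robust    T2 = {robust_T2(X, mu0):.1f} vs 5% critical value {crit:.1f}")
```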

7.
A test statistic is considered for testing a hypothesis about the mean vector of multivariate data when the dimension of the vector, p, may exceed the number of vectors, n, and the underlying distribution need not necessarily be normal. With n, p → ∞, and under mild assumptions, but without assuming any relationship between n and p, the statistic is shown to asymptotically follow a chi-square distribution. A by-product of the paper is the approximate distribution of a quadratic form, based on a reformulation of the well-known Box approximation, under a high-dimensional setup. Using a classical limit theorem, the approximation is further extended to an asymptotic normal limit under the same high-dimensional setup. Simulation results, generated under different parameter settings, are used to show the accuracy of the approximation for moderate n and large p.
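The paper's statistic itself is not reproduced here, but the two-moment (Box/Satterthwaite-type) chi-square approximation to a Gaussian quadratic form that underlies it is easy to demonstrate: for z ~ N(0, I), Q = z'Az has E[Q] = tr A and Var[Q] = 2 tr A², so Q is approximated by g·χ²(h) with g = tr A²/tr A and h = (tr A)²/tr A².

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
p = 100
B = rng.standard_normal((p, p))
A = B @ B.T / p                      # a positive semi-definite weight matrix

# Match the first two moments of Q = z'Az to g * chi2(h).
trA, trA2 = np.trace(A), np.trace(A @ A)
g, h = trA2 / trA, trA**2 / trA2

# Check the approximation against direct simulation.
z = rng.standard_normal((50_000, p))
Q = np.einsum('ij,jk,ik->i', z, A, z)   # row-wise quadratic forms z_i' A z_i
for q in (0.90, 0.95, 0.99):
    print(f"{q:.0%} quantile: simulated {np.quantile(Q, q):8.2f}   "
          f"approx {g * stats.chi2.ppf(q, h):8.2f}")
```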

8.
The construction of an importance density for partially non-Gaussian state space models is crucial when simulation methods are used for likelihood evaluation, signal extraction, and forecasting. The method of efficient importance sampling is successful in this respect, but we show that it can be implemented in a computationally more efficient manner using standard Kalman filter and smoothing methods. Efficient importance sampling is generally applicable for a wide range of models, but it is typically a custom-built procedure. For the class of partially non-Gaussian state space models, we present a general method for efficient importance sampling. Our novel method makes the efficient importance sampling methodology more accessible because it does not require the computation of a (possibly) complicated density kernel that needs to be tracked for each time period. The new method is illustrated for a stochastic volatility model with a Student's t distribution.

9.
The distribution of a functional of two correlated vector Brownian motions is approximated by a Gamma distribution. This functional represents the limiting distribution for cointegration tests with stationary exogenous regressors, but also for cointegration tests based on a non-Gaussian likelihood. The approximation is accurate, fast and easy to use in comparison with both tabulated critical values and simulated p-values. The proposed procedure is applied to a UK model investigating purchasing power parity.
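A generic sketch of the device, with a toy discretised Brownian functional standing in for the paper's limit: simulate the functional once, match its first two moments to a Gamma law, and then read critical values and p-values from the fitted Gamma.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def toy_functional(T=500):
    """Discretised stand-in for a limiting Brownian functional:
    (int B dB)^2 / int B^2, the square of a Dickey-Fuller-type ratio."""
    e = rng.standard_normal(T) / np.sqrt(T)
    B = np.cumsum(e)
    num = np.sum(B[:-1] * e[1:])          # ~ int B dB
    den = np.mean(B**2)                   # ~ int B^2
    return num**2 / den

draws = np.array([toy_functional() for _ in range(20_000)])

# Fit a Gamma law by matching mean and variance (method of moments):
# for Gamma(shape, scale), mean = shape*scale and var = shape*scale^2.
m, v = draws.mean(), draws.var()
shape, scale = m**2 / v, v / m

x = 6.0                                   # a hypothetical observed statistic
print("simulated P(X > x):", np.mean(draws > x))
print("Gamma apx P(X > x):", stats.gamma.sf(x, shape, scale=scale))
print("Gamma 95% critical value:", stats.gamma.ppf(0.95, shape, scale=scale))
```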

10.
We study the panel dynamic ordinary least squares (DOLS) estimator of a homogeneous cointegration vector for a balanced panel of N individuals observed over T time periods. Allowable heterogeneity across individuals includes individual-specific time trends, individual-specific fixed effects and time-specific effects. The estimator is fully parametric, computationally convenient, and more precise than the single-equation estimator. For fixed N as T→∞, the estimator converges to a function of Brownian motions, and the Wald statistic for testing a set of s linear constraints has a limiting χ²(s) distribution. The estimator also has a Gaussian sequential limit distribution that is obtained first by letting T→∞ and then letting N→∞. In a series of Monte Carlo experiments, we find that the asymptotic distribution theory provides a reasonably close approximation to the exact finite sample distribution. We use panel DOLS to estimate coefficients of the long-run money demand function from a panel of 19 countries with annual observations that span from 1957 to 1996. The estimated income elasticity is 1.08 (asymptotic s.e. = 0.26) and the estimated interest rate semi-elasticity is −0.02 (asymptotic s.e. = 0.01).
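A minimal simulation sketch of panel DOLS under assumptions chosen for illustration (random-walk regressor, errors correlated with the regressor's innovations, individual fixed effects removed by the within transformation, K leads and lags of Δx):

```python
import numpy as np

rng = np.random.default_rng(8)
N, T, beta, K = 19, 40, 1.0, 2      # individuals, periods, true beta, leads/lags

rows_y, rows_X = [], []
for i in range(N):
    dx = rng.standard_normal(T)
    x = np.cumsum(dx)                        # I(1) regressor
    u = 0.5 * dx + rng.standard_normal(T)    # errors correlated with dx
    y = 2.0 * i + beta * x + u               # individual fixed effect 2*i
    # DOLS design: level of x plus leads and lags of dx; trim the ends
    # so the np.roll wrap-around never enters the regression.
    cols = [x] + [np.roll(dx, -k) for k in range(-K, K + 1)]
    Z = np.column_stack(cols)[K:T - K]
    yy = y[K:T - K]
    rows_y.append(yy - yy.mean())            # within transformation
    rows_X.append(Z - Z.mean(axis=0))

b, *_ = np.linalg.lstsq(np.vstack(rows_X), np.concatenate(rows_y), rcond=None)
print("panel DOLS estimate of beta:", round(float(b[0]), 4))   # ~ 1.0
```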

11.
We consider estimation and testing of linkage equilibrium from genotypic data on a random sample of sibs, such as monozygotic and dizygotic twins. We compute the maximum likelihood estimator with an EM algorithm and a likelihood ratio statistic that takes the family structure into account. As we are interested in applying this to twin data, we also allow observations on single children, so that monozygotic twins can be included. We allow a non-zero recombination fraction between the loci of interest, so that linkage disequilibrium between both linked and unlinked loci can be tested. The EM algorithm for computing the maximum likelihood estimator of the haplotype frequencies, and the likelihood ratio test statistic, are described in detail. It is shown that the usual estimators of haplotype frequencies, which ignore that the sibs are related, are inefficient, as is the corresponding likelihood ratio test for linkage disequilibrium.
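For contrast with the paper's family-aware likelihood, here is the standard two-locus haplotype-frequency EM for unrelated individuals, i.e. the "usual" estimator that ignores relatedness; only the double-heterozygous genotype is phase-ambiguous, and the E-step resolves it with the current frequency estimates. Frequencies and sample size are arbitrary.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(9)

# Haplotypes over two biallelic loci, coded as (allele at locus 1,
# allele at locus 2): index 0 = AB, 1 = Ab, 2 = aB, 3 = ab.
HAPS = [(1, 1), (1, 0), (0, 1), (0, 0)]

# Simulate unphased genotypes (allele counts per locus) for unrelated
# individuals from known haplotype frequencies.
p_true = np.array([0.4, 0.2, 0.1, 0.3])
n = 500
h1, h2 = rng.choice(4, n, p=p_true), rng.choice(4, n, p=p_true)
geno = [(HAPS[a][0] + HAPS[b][0], HAPS[a][1] + HAPS[b][1])
        for a, b in zip(h1, h2)]

def em_haplotype_freqs(geno, n_iter=100):
    p = np.full(4, 0.25)                     # uniform start
    for _ in range(n_iter):
        counts = np.zeros(4)
        for g in geno:
            # E-step: posterior over ordered haplotype pairs compatible
            # with the observed genotype (only (1, 1) is ambiguous).
            pairs = [(a, b) for a, b in product(range(4), repeat=2)
                     if (HAPS[a][0] + HAPS[b][0],
                         HAPS[a][1] + HAPS[b][1]) == g]
            w = np.array([p[a] * p[b] for a, b in pairs])
            w /= w.sum()
            for (a, b), wi in zip(pairs, w):
                counts[a] += wi
                counts[b] += wi
        p = counts / counts.sum()            # M-step: normalized counts
    return p

print("true:", p_true)
print("EM  :", np.round(em_haplotype_freqs(geno), 3))
```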

12.
The negativity of the substitution matrix implies that its latent roots are non-positive. When inequality restrictions are tested, standard test statistics such as the likelihood ratio or Wald statistic are not χ²-distributed in large samples. We propose a Wald test for testing the negativity of the substitution matrix. The asymptotic distribution of the statistic is a mixture of χ² distributions. The Wald test provides an exact critical value for a given significance level. The problems involved in computing the exact critical value can be avoided by using the upper and lower bound critical values derived by Kodde and Palm (1986). Finally, the methods are applied to the empirical results obtained by Barten and Geyskens (1975).
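The chi-bar-square phenomenon is easiest to see in one dimension. With θ̂ ~ N(0, 1) under H0 and a one-sided restriction, the Wald statistic W = max(θ̂, 0)² follows the mixture ½χ²(0) + ½χ²(1), so referring W to a plain χ²(1) critical value is conservative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)

# W = max(theta_hat, 0)^2 under H0: half the mass sits at 0 (chi2(0)),
# the other half follows chi2(1).
W = np.maximum(rng.standard_normal(200_000), 0.0) ** 2

for c in (2.71, 3.84):
    print(f"c = {c}:  P(W > c) simulated = {np.mean(W > c):.4f},  "
          f"mixture = {0.5 * stats.chi2.sf(c, 1):.4f},  "
          f"plain chi2(1) = {stats.chi2.sf(c, 1):.4f}")
```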

13.
In this paper, we use the local influence method to study a vector autoregressive model under Student's t-distributions. We present the maximum likelihood estimators and the information matrix. We establish the normal curvature diagnostics for the vector autoregressive model under three usual perturbation schemes for identifying possible influential observations. The effectiveness of the proposed diagnostics is examined by a simulation study, followed by an application in which the model is fitted to the weekly log returns of Chevron stock and the Standard & Poor's 500 Index.

14.
A test statistic is developed for making inference about a block-diagonal structure of the covariance matrix when the dimensionality p exceeds n, where n = N − 1 and N denotes the sample size. The suggested procedure extends the complete independence results. Because the classical hypothesis testing methods based on the likelihood ratio degenerate when p > n, the main idea is to turn instead to a distance function between the null and alternative hypotheses. The test statistic is then constructed using a consistent estimator of this function, where consistency is considered in an asymptotic framework that allows p to grow together with n. The suggested statistic is also shown to have an asymptotic normality under the null hypothesis. Some auxiliary results on the moments of products of multivariate normal random vectors and higher-order moments of the Wishart matrices, which are important for our evaluation of the test statistic, are derived. We perform empirical power analysis for a number of alternative covariance structures.

15.
We review some first-order and higher-order asymptotic techniques for M-estimators, and we study their stability in the presence of data contaminations. We show that the estimating function ψ and its derivative with respect to the parameter play a central role. We discuss in detail the first-order Gaussian density approximation, the saddlepoint density approximation, the saddlepoint test, the tail area approximation via the Lugannani-Rice formula, and the empirical saddlepoint density approximation (a technique related to the empirical likelihood method). For all these asymptotics, we show that a ψ bounded in the Euclidean norm and a parameter derivative ∂ψ/∂θ bounded, for example, in the Frobenius norm yield stable inference in the presence of data contamination. We motivate and illustrate our findings by theoretical and numerical examples for the benchmark case of the one-dimensional location model.
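A concrete instance of a bounded ψ is Huber's clipped score. The sketch below computes the location M-estimate with MAD scale; a few gross errors ruin the sample mean but barely move the M-estimate, which is the stability the review formalizes. The tuning constant and contamination pattern are arbitrary.

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(11)
x = np.concatenate([rng.normal(0.0, 1.0, 95), np.full(5, 50.0)])  # 5% gross errors

def psi_huber(u, k=1.345):
    """Bounded Huber estimating function: linear in the centre, clipped
    at +/- k, so a single observation has bounded influence."""
    return np.clip(u, -k, k)

s = 1.4826 * np.median(np.abs(x - np.median(x)))   # robust MAD scale

def score(theta):
    # The location M-estimate solves sum psi((x_i - theta)/s) = 0.
    return np.sum(psi_huber((x - theta) / s))

theta_hat = optimize.brentq(score, x.min(), x.max())
print(f"mean = {x.mean():.2f} (ruined by outliers), "
      f"Huber M-estimate = {theta_hat:.2f}")
```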

16.
Xiuli Wang, Gaorong Li, Lu Lin, Metrika (2011) 73(2): 171-185
In this paper, we apply the empirical likelihood method to study semi-parametric varying-coefficient partially linear errors-in-variables models. An empirical log-likelihood ratio statistic for the unknown parameter β, which is of primary interest, is suggested. We show that the proposed statistic is asymptotically standard chi-square distributed under some suitable conditions, and hence it can be used to construct a confidence region for the parameter β. Simulations indicate that, in terms of coverage probabilities and average lengths of the confidence intervals, the proposed method performs better than the least-squares method. We also give the maximum empirical likelihood estimator (MELE) of the unknown parameter β, and prove that the MELE is asymptotically normal under some suitable conditions.
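The mechanics of an empirical log-likelihood ratio are easiest to see for a simple mean rather than the paper's varying-coefficient errors-in-variables model. This sketch profiles out the Lagrange multiplier and inverts the asymptotic χ²(1) calibration into a confidence interval; it assumes the hypothesized mean lies inside the convex hull of the data.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(12)
x = rng.exponential(1.0, size=80)              # true mean = 1

def el_logratio(mu):
    """-2 log empirical likelihood ratio for the mean (Owen); its
    asymptotic null distribution is chi-square with 1 df."""
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:           # mu outside the convex hull
        return np.inf
    n = len(z)
    # Lagrange multiplier: solve sum z_i / (1 + lam z_i) = 0 on the
    # interval where every implied weight lies in (0, 1).
    lo = (1.0 / n - 1.0) / z.max() + 1e-10
    hi = (1.0 / n - 1.0) / z.min() - 1e-10
    lam = optimize.brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

cut = stats.chi2.ppf(0.95, 1)
grid = np.linspace(x.mean() - 0.5, x.mean() + 0.5, 401)
inside = [mu for mu in grid if el_logratio(mu) <= cut]
print(f"EL 95% CI for the mean: ({inside[0]:.3f}, {inside[-1]:.3f})")
print("-2 log R at mu = 1:", round(el_logratio(1.0), 3))
```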

17.
Ten empirical models of travel behavior are used to measure the variability of structural equation model goodness-of-fit as a function of sample size, multivariate kurtosis, and estimation technique. The estimation techniques are maximum likelihood, asymptotic distribution free, bootstrapping, and the Mplus approach. The results highlight the divergence of these techniques when sample sizes are small and/or multivariate kurtosis is high. Recommendations include using multiple estimation techniques and, when sample sizes are large, sampling the data and re-estimating the models to test the robustness of the specifications and to quantify, to some extent, the large-sample bias inherent in the χ² test statistic.
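As a small companion to the kurtosis and bootstrapping themes, the sketch below computes Mardia's multivariate kurtosis (which equals p(p + 2) under multivariate normality) and bootstraps it by resampling rows; the choice of a multivariate t data-generating process is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(13)
n, p = 300, 4
X = rng.standard_t(df=5, size=(n, p))      # heavy tails -> excess kurtosis

def mardia_kurtosis(X):
    """Mardia's multivariate kurtosis b_{2,p}: the mean of squared
    Mahalanobis distances, equal to p(p+2) under normality."""
    Z = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum('ij,jk,ik->i', Z, S_inv, Z)
    return np.mean(d2**2)

b2 = mardia_kurtosis(X)

# Nonparametric bootstrap of the statistic: resample rows with replacement.
boot = np.array([mardia_kurtosis(X[rng.integers(0, n, n)]) for _ in range(1000)])
print(f"b2 = {b2:.1f} (normal value = {p * (p + 2)}),  "
      f"bootstrap 95% CI = ({np.quantile(boot, .025):.1f}, "
      f"{np.quantile(boot, .975):.1f})")
```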

18.
Many phenomena in the life sciences can be analyzed by using a fixed-design regression model with a regression function m that exhibits a crossing-point in the following sense: the regression function runs below or above its mean level according as the input variable lies to the left or to the right of that crossing-point, or vice versa. We propose a non-parametric estimator and show weak and strong consistency as long as the crossing-point is unique. The estimator is defined as the maximizing point (argmax) of a certain marked empirical process. For testing the hypothesis H0 that the regression function m is actually constant (no crossing-point), a decision rule is designed for the specific alternative H1 that m possesses a crossing-point. The pertaining test statistic is the ratio max/argmax of the maximum value and the maximizing point of the marked empirical process. Under the hypothesis, the ratio converges in distribution to the corresponding ratio of a reflected Brownian bridge, for which we derive the distribution function. The test is consistent on the whole alternative and superior to the corresponding Kolmogorov-Smirnov test, which is based only on the maximal value max. Some practical examples of possible applications are given, and a study about dental phobia is discussed in more detail.
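A sketch of the estimator's core: cumulate the centred responses along the ordered inputs to obtain the marked empirical process, take the argmax of its absolute value as the crossing-point estimate, and form the max/argmax ratio. The scalings here are illustrative, not the paper's exact normalisation.

```python
import numpy as np

rng = np.random.default_rng(14)
n = 400
x = np.sort(rng.uniform(0, 1, n))                 # fixed design inputs
m = np.where(x < 0.6, -0.3, 0.45)                 # crossing-point at 0.6
y = m + rng.standard_normal(n) * 0.5

# Marked empirical process: cumulative sum of centred responses along x.
# Below the crossing-point the increments have one sign, above it the
# other, so |M| peaks near the crossing-point.
M = np.cumsum(y - y.mean()) / np.sqrt(n)
i_hat = np.argmax(np.abs(M))
print("estimated crossing-point:", round(x[i_hat], 3))       # ~ 0.6

# Test statistic in the spirit of the paper: ratio of the maximum of
# |M| to the maximising point.
stat = np.abs(M).max() / x[i_hat]
print("max/argmax ratio:", round(stat, 3))
```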

19.
Survey Estimates by Calibration on Complex Auxiliary Information
In the last decade, calibration estimation has developed into an important field of research in survey sampling. Calibration is now an important methodological instrument in the production of statistics. Several national statistical agencies have developed software designed to compute calibrated weights based on auxiliary information available in population registers and other sources. This paper reviews some recent progress and offers some new perspectives. Calibration estimation can be used to advantage in a range of different survey conditions. This paper examines several situations, including estimation for domains in one-phase sampling, estimation for two-phase sampling, and estimation for two-stage sampling with integrated weighting. Typical of those situations is complex auxiliary information, a term that we use for information made up of several components. An example occurs when a two-stage sample survey has information both for units and for clusters of units, or when estimation for domains relies on information from different parts of the population. Complex auxiliary information opens up more than one way of computing the final calibrated weights to be used in estimation. They may be computed in a single step or in two or more successive steps. Depending on the approach, the resulting estimates do differ to some degree. All significant parts of the total information should be reflected in the final weights. The effectiveness of the complex information is mirrored by the variance of the resulting calibration estimator. Its exact variance is not presentable in simple form. Close approximation is possible via the corresponding linearized statistic. We define and use automated linearization as a shortcut in finding the linearized statistic. Its variance is easy to state, to interpret and to estimate. The variance components are expressed in terms of residuals, similar to those of standard regression theory. Visual inspection of the residuals reveals how the different components of the complex auxiliary information interact and work together toward reducing the variance.
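A minimal sketch of linear (GREG-type) calibration with a single-step adjustment: design weights dᵢ are modified to wᵢ = dᵢ(1 + xᵢ'λ), with λ solved so that the calibrated weights reproduce the known auxiliary totals. The population and sampling design below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(15)

# Population with two auxiliary variables whose totals are known.
N_pop = 10_000
aux = np.column_stack([np.ones(N_pop), rng.lognormal(0, 0.5, N_pop)])
X_tot = aux.sum(axis=0)                       # known population totals

# Simple random sample with design weights d_i = N/n.
n = 400
idx = rng.choice(N_pop, n, replace=False)
x, d = aux[idx], np.full(n, N_pop / n)

# Linear calibration: w_i = d_i * (1 + x_i' lam), where lam solves
# (sum_i d_i x_i x_i') lam = X_tot - sum_i d_i x_i.
T = (d[:, None] * x).T @ x
lam = np.linalg.solve(T, X_tot - d @ x)
w = d * (1.0 + x @ lam)

print("benchmarks met:", np.allclose(w @ x, X_tot))
y = rng.normal(5, 1, n) + 2 * x[:, 1]         # a survey variable
print(f"HT estimate: {d @ y:.0f}   calibration estimate: {w @ y:.0f}")
```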

20.
A frequently occurring problem is to find the maximum likelihood estimate (MLE) of p subject to p ∈ C, where C is a subset of P, the set of probability vectors in R^k. The problem has been discussed by many authors, who mainly focused on the case where p is restricted by linear or log-linear constraints. In this paper, we establish the relationship between the maximum likelihood estimation of p restricted by p ∈ C and the EM algorithm, and demonstrate that the maximum likelihood estimator can be computed through the EM algorithm (Dempster et al., J R Stat Soc Ser B 39:1-38, 1977). Several examples are analyzed by the proposed method.
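The EM mechanics can be seen in the classic multinomial example from Dempster, Laird and Rubin (1977): counts (125, 18, 20, 34) with cell probabilities (1/2 + θ/4, (1 − θ)/4, (1 − θ)/4, θ/4). The first cell mixes a θ-free part and a θ/4 part; the E-step imputes the split, and the M-step is a closed-form binomial proportion.

```python
import numpy as np

# Counts y with cell probabilities (1/2 + t/4, (1-t)/4, (1-t)/4, t/4).
y = np.array([125.0, 18.0, 20.0, 34.0])
t = 0.5                                      # starting value

for _ in range(50):
    # E-step: expected count in the latent "t/4" piece of cell 1.
    x1 = y[0] * (t / 4) / (0.5 + t / 4)
    # M-step: complete-data MLE of t, a simple binomial proportion.
    t_new = (x1 + y[3]) / (x1 + y[1] + y[2] + y[3])
    if abs(t_new - t) < 1e-10:
        break
    t = t_new

print("EM estimate of theta:", round(t, 6))  # converges to ~ 0.626821
```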
