Similar Literature
20 similar documents retrieved.
1.
Hinkley (1977) derived two tests for testing the mean of a normal distribution with known coefficient of variation (c.v.) for right alternatives. They are the locally most powerful (LMP) test and the conditional test based on the ancillary statistic for μ. In this paper, the likelihood ratio (LR) and Wald tests are derived for the one- and two-sided alternatives, as well as the two-sided version of the LMP test. The performances of these tests are compared with those of the classical t, sign and Wilcoxon signed rank tests. The latter three tests do not use the information on the c.v. A normal approximation is used to approximate the null distribution of the test statistics, except for the t test. Simulation results indicate that all the tests maintain the type I error rates, that is, the attained level is close to the nominal level of significance of the tests. The power functions of the tests are estimated through simulation. The power comparison indicates that for one-sided alternatives the LMP test is the best test, whereas for two-sided alternatives the LR or the Wald test is the best. The t, sign and Wilcoxon signed rank tests have lower power than the LMP, LR and Wald tests at various alternative values of μ, and the power difference is quite large in several simulation configurations. Further, when the c.v. is large, the t, sign and Wilcoxon signed rank tests have considerably lower power even for alternatives far away from the null hypothesis. To study the sensitivity of the tests to violation of the normality assumption, the type I error rates are also estimated for observations from lognormal, gamma and uniform distributions. The newly derived tests maintain the type I error rates for moderate values of the c.v.
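As a rough illustration of the kind of type I error simulation described above, the sketch below estimates the attained level of the classical t, sign and Wilcoxon signed-rank tests for H0: μ = μ0 when the data are normal with a known coefficient of variation. It does not implement the LMP, LR or Wald tests derived in the paper, which would require their specific score and likelihood expressions; sample size, c.v., nominal level and replication count are illustrative assumptions, and a recent SciPy is assumed for `binomtest`.

```python
# Sketch: attained level of the t, sign and Wilcoxon tests for H0: mu = mu0
# when X ~ N(mu0, (cv*mu0)^2).  Illustrative settings, not the paper's design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu0, cv, n, alpha, n_rep = 10.0, 0.3, 30, 0.05, 2000

rej = {"t": 0, "sign": 0, "wilcoxon": 0}
for _ in range(n_rep):
    x = rng.normal(mu0, cv * mu0, size=n)
    # one-sample t test
    if stats.ttest_1samp(x, mu0).pvalue < alpha:
        rej["t"] += 1
    # sign test: count above mu0 is Binomial(n, 1/2) under H0
    if stats.binomtest((x > mu0).sum(), n, 0.5).pvalue < alpha:
        rej["sign"] += 1
    # Wilcoxon signed-rank test on the differences x - mu0
    if stats.wilcoxon(x - mu0).pvalue < alpha:
        rej["wilcoxon"] += 1

for name, count in rej.items():
    print(f"{name:8s} attained level = {count / n_rep:.3f}")
```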

2.
The asymptotic approach and Fisher's exact approach have often been used for testing the association between two dichotomous variables. The asymptotic approach may be appropriate in large samples but is often criticized for unacceptably high actual type I error rates in small to medium sample sizes. Fisher's exact approach suffers from conservative type I error rates and low power. For these reasons, a number of exact unconditional approaches have been proposed, which have generally been seen to be more powerful than their exact conditional counterparts. We consider the traditional unconditional approach based on maximization and compare it to our presented approach, which is based on estimation and maximization. We extend the unconditional approach based on estimation and maximization to designs with the total sum fixed. The procedures based on the Pearson chi-square, Yates's corrected, and likelihood ratio test statistics are evaluated with regard to actual type I error rates and power. A real example is used to illustrate the various testing procedures. The unconditional approach based on estimation and maximization performs well, having an actual level much closer to the nominal level. The Pearson chi-square and likelihood ratio test statistics work well with this efficient unconditional approach, which is generally more powerful than the other p-value calculation methods in the scenarios considered.
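For a 2×2 table, the three families of p-values discussed above are easy to compare in SciPy. The sketch below uses a made-up table; `barnard_exact` and `boschloo_exact` are SciPy's exact unconditional tests based on maximization (SciPy ≥ 1.7 assumed), not the estimation-and-maximization approach proposed in the paper.

```python
# Sketch: p-values for association in a 2x2 table from the asymptotic,
# conditional exact and unconditional exact approaches.  The table is
# invented for illustration.
import numpy as np
from scipy import stats

table = np.array([[7, 12],
                  [1, 13]])

chi2, p_asym, dof, _ = stats.chi2_contingency(table, correction=False)
print(f"Pearson chi-square:      p = {p_asym:.4f}")

_, p_fisher = stats.fisher_exact(table)
print(f"Fisher's exact (cond.):  p = {p_fisher:.4f}")

print(f"Barnard (uncond., max):  p = {stats.barnard_exact(table).pvalue:.4f}")
print(f"Boschloo (uncond.):      p = {stats.boschloo_exact(table).pvalue:.4f}")
```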

3.
This paper deals with the issue of testing hypotheses in symmetric and log-symmetric linear regression models in small and moderate-sized samples. We focus on four tests, namely, the Wald, likelihood ratio, score, and gradient tests. These tests rely on asymptotic results and are unreliable when the sample size is not large enough to guarantee a good agreement between the exact distribution of the test statistic and the corresponding chi-squared asymptotic distribution. Bartlett and Bartlett-type corrections typically attenuate the size distortion of the tests. Such corrections are available in the literature for the likelihood ratio and score tests in symmetric linear regression models. Here, we derive a Bartlett-type correction for the gradient test. We show that the corrections are also valid for log-symmetric linear regression models. We numerically compare the various tests and their bootstrapped versions through simulations. Our results suggest that the corrected and bootstrapped tests exhibit type I error probabilities closer to the chosen nominal level with virtually no power loss. The analytically corrected tests, including the Bartlett-corrected gradient test derived in this paper, perform as well as the bootstrapped tests, with the advantage of not requiring computationally intensive calculations. We present a real data application to illustrate the usefulness of the modified tests.
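The Bartlett-type corrections themselves require model-specific cumulant calculations, but the bootstrap alternative mentioned above is simple to sketch. The snippet below calibrates a likelihood-ratio statistic for a nested normal linear regression by parametric bootstrap; the data, the helper `loglik_ols` and the plain Gaussian model are illustrative assumptions, not the symmetric or log-symmetric models of the paper.

```python
# Sketch: parametric-bootstrap calibration of a likelihood-ratio test for
# H0: beta2 = 0 in a normal linear regression.  Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 25
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.5 * x1 + rng.normal(size=n)          # H0 holds in this DGP

def loglik_ols(y, X):
    """Profile Gaussian log-likelihood of an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / len(y)
    return -0.5 * len(y) * (np.log(2 * np.pi * s2) + 1), beta, s2

X0 = np.column_stack([np.ones(n), x1])           # null model
X1 = np.column_stack([np.ones(n), x1, x2])       # full model

ll0, beta0, s2_0 = loglik_ols(y, X0)
ll1, _, _ = loglik_ols(y, X1)
lr_obs = 2 * (ll1 - ll0)

# bootstrap the LR statistic under the fitted null model
B = 999
lr_boot = np.empty(B)
for b in range(B):
    y_b = X0 @ beta0 + rng.normal(scale=np.sqrt(s2_0), size=n)
    lr_boot[b] = 2 * (loglik_ols(y_b, X1)[0] - loglik_ols(y_b, X0)[0])

p_boot = (1 + np.sum(lr_boot >= lr_obs)) / (B + 1)
print(f"LR = {lr_obs:.3f}, bootstrap p = {p_boot:.3f}")
```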

4.
In a cross-section where the initial distribution of observations differs from the steady-state distribution and initial values matter, convergence is best measured in terms of σ-convergence over a fixed time period. For this setting, we propose a new simple Wald test for conditional σ-convergence. According to our Monte Carlo simulations, this test performs well and its power is comparable with that of the available tests of unconditional convergence. We apply two versions of the test to conditional convergence in the size of European manufacturing firms. The null hypothesis of no convergence is rejected for all country groups, for most single economies, and for the younger firms in our sample of 49,646 firms.

5.
In this paper, Bayesian estimation of log odds ratios over R × C and 2 × 2 × K contingency tables is considered, which is practically reasonable in the presence of prior information. Likelihood functions for the log odds ratios are derived for each table structure. A prior specification strategy is proposed. Posterior inferences are drawn using Gibbs sampling and the Metropolis–Hastings algorithm. Two numerical examples are given to illustrate the approach.
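To illustrate the kind of posterior sampling described (for a single 2×2 table rather than the R×C and 2×2×K structures treated in the paper), the sketch below draws the log odds ratio with a random-walk Metropolis–Hastings step, placing normal priors on the baseline logit and on the log odds ratio. The counts, the prior variances and the tuning step are illustrative assumptions.

```python
# Sketch: random-walk Metropolis-Hastings for the log odds ratio of a 2x2
# table, with normal priors on the baseline logit (phi) and log OR (psi).
import numpy as np

rng = np.random.default_rng(42)
y1, n1 = 18, 30        # successes / trials in row 1
y2, n2 = 9, 28         # successes / trials in row 2

def log_post(phi, psi):
    """Log posterior of (baseline logit phi, log odds ratio psi)."""
    eta1, eta2 = phi + psi, phi
    loglik = (y1 * eta1 - n1 * np.logaddexp(0, eta1)
              + y2 * eta2 - n2 * np.logaddexp(0, eta2))
    logprior = -0.5 * (phi / 10.0) ** 2 - 0.5 * (psi / 2.0) ** 2
    return loglik + logprior

n_iter, step = 20000, 0.4
draws = np.empty((n_iter, 2))
cur = np.array([0.0, 0.0])
cur_lp = log_post(*cur)
for i in range(n_iter):
    prop = cur + rng.normal(scale=step, size=2)
    prop_lp = log_post(*prop)
    if np.log(rng.uniform()) < prop_lp - cur_lp:   # accept/reject
        cur, cur_lp = prop, prop_lp
    draws[i] = cur

psi = draws[n_iter // 2:, 1]                       # discard burn-in
print(f"posterior mean log OR = {psi.mean():.3f}")
print(f"95% credible interval = {np.percentile(psi, [2.5, 97.5])}")
```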

6.
The paper develops a novel testing procedure for hypotheses on deterministic trends in a multivariate trend-stationary model. The trends are estimated by the OLS estimator and the long-run variance (LRV) matrix is estimated by a series-type estimator with carefully selected basis functions. Regardless of whether the number of basis functions K is fixed or grows with the sample size, the Wald statistic converges to a standard distribution. It is shown that critical values from the fixed-K asymptotics are second-order correct under the large-K asymptotics. A new practical approach is proposed for selecting K that addresses the central concern of hypothesis testing: the selected smoothing parameter is testing-optimal in that it minimizes the type II error while controlling the type I error. Simulations indicate that the new test is as accurate in size as the nonstandard test of Vogelsang and Franses (2005) and as powerful as the corresponding Wald test based on the large-K asymptotics. The new test therefore combines the advantages of the nonstandard test and the standard Wald test while avoiding their main disadvantages (power loss and size distortion, respectively).

7.
We revisit the methodology and historical development of subsampling, and then explore in detail its use in hypothesis testing, an area which has received surprisingly modest attention. In particular, the general set-up of a possibly high-dimensional parameter with data from K populations is explored. The role of centring the subsampling distribution is highlighted, and it is shown that hypothesis testing with a data-centred subsampling distribution is more powerful. In addition, we demonstrate subsampling's ability to handle a non-standard Behrens–Fisher problem, i.e., a comparison of the means of two or more populations which may possess not only different and possibly infinite variances, but also different distributions. However, our formulation is general, permitting even functional data and/or statistics. Finally, we provide theory for K-sample U-statistics that helps establish the asymptotic validity of subsampling confidence intervals and tests in this very general setting.
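The mechanics of a data-centred subsampling test are easy to sketch for a simple two-sample mean comparison. The snippet below assumes a √n convergence rate (so finite variances, unlike the heavy-tailed cases the paper also covers), and the subsample fraction and replication count are illustrative choices.

```python
# Sketch: a data-centred subsampling test for equality of two means
# (a simple Behrens-Fisher setting).  sqrt(n) rate assumed.
import numpy as np

rng = np.random.default_rng(7)
x = rng.standard_t(df=5, size=120) * 2.0           # group 1
y = rng.standard_t(df=3, size=80)                  # group 2, equal mean under H0

n1, n2 = len(x), len(y)
d_hat = x.mean() - y.mean()
t_full = np.sqrt(n1 + n2) * d_hat                  # statistic for H0: means equal

frac, B = 0.25, 2000                               # subsample fraction, replications
b1, b2 = int(frac * n1), int(frac * n2)
centred = np.empty(B)
for b in range(B):
    xs = rng.choice(x, size=b1, replace=False)
    ys = rng.choice(y, size=b2, replace=False)
    # centre the subsample statistic at the full-sample estimate
    centred[b] = np.sqrt(b1 + b2) * ((xs.mean() - ys.mean()) - d_hat)

p_val = np.mean(np.abs(centred) >= abs(t_full))
print(f"subsampling p-value = {p_val:.3f}")
```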

8.
This paper illustrates the pitfalls of the conventional heteroskedasticity and autocorrelation robust (HAR) Wald test and the advantages of new HAR tests developed by Kiefer and Vogelsang in 2005 and by Phillips, Sun and Jin in 2003 and 2006. The illustrations use the 1993 Fama–French three-factor model. The null that the intercepts are zero is tested for 5-year, 10-year and longer sub-periods. The conventional HAR test with asymptotic P-values rejects the null for most 5-year and 10-year sub-periods. By contrast, the null is not rejected by the new HAR tests. This conflict is explained by showing that inferences based on the conventional HAR test are misleading for the sample sizes used in this application.
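For reference, the conventional HAR test discussed above amounts to an OLS regression with a HAC (Newey–West) covariance matrix. The sketch below runs it on simulated factor data rather than the Fama–French factors, and the lag truncation is an arbitrary illustrative choice; the fixed-b critical values of the newer HAR tests are not reproduced here.

```python
# Sketch: a conventional HAR (Newey-West) test that the intercept in a
# factor-model regression is zero.  Data are simulated, not Fama-French.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
T = 120                                            # e.g. 10 years of monthly data
factors = rng.normal(size=(T, 3))
# returns generated with a zero intercept and mildly serially correlated errors
e = np.convolve(rng.normal(size=T + 12), np.full(12, 1 / 12.0), mode="valid")[:T]
ret = factors @ np.array([0.8, 0.4, -0.3]) + e

X = sm.add_constant(factors)
fit = sm.OLS(ret, X).fit(cov_type="HAC", cov_kwds={"maxlags": 6})
print(f"HAC t-stat for the intercept: {fit.tvalues[0]:.3f}, p = {fit.pvalues[0]:.3f}")
```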

9.
A simulation study was conducted to investigate the effect of non-normality and unequal variances on the type I error rates and power of the classical factorial ANOVA F-test and several alternatives, namely the rank transformation procedure (FR), winsorized mean (FW), modified mean (FM) and permutation test (FP), for testing interaction effects. Simulation results showed that as long as there is no significant deviation from normality and homogeneity of variances, all of the tests generally display similar results. However, when there is significant deviation from the assumptions, the tests other than FR and FP are considerably affected. As a result, when the assumptions of the factorial ANOVA F-test are not met, or when it has not been checked whether they are met, using the FR and FP tests is more suitable than using the classical factorial ANOVA F-test.
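The rank-transformation (FR) procedure compared above is simply the usual factorial F test applied to the ranked response. The sketch below shows it next to the classical test for a two-way layout with skewed errors; the design, effect sizes and error distribution are illustrative assumptions (and, as the following abstract notes, the RT approach for interactions is itself contested).

```python
# Sketch: classical vs rank-transformation (FR) F test for the A x B
# interaction in a two-way factorial design with skewed errors.
import numpy as np
import pandas as pd
from scipy.stats import rankdata
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
rows = []
for a in range(3):
    for b in range(2):
        # skewed errors to mimic a violation of normality
        y = 1.0 + 0.5 * a + 0.3 * b + rng.exponential(scale=2.0, size=10)
        rows += [{"A": f"a{a}", "B": f"b{b}", "y": v} for v in y]
df = pd.DataFrame(rows)

df["ry"] = rankdata(df["y"])                       # rank-transform the response
anova_raw = sm.stats.anova_lm(smf.ols("y ~ C(A) * C(B)", df).fit(), typ=2)
anova_rank = sm.stats.anova_lm(smf.ols("ry ~ C(A) * C(B)", df).fit(), typ=2)
print("classical F test, interaction row:\n", anova_raw.loc["C(A):C(B)"])
print("rank-transform (FR) test, interaction row:\n", anova_rank.loc["C(A):C(B)"])
```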

10.
Many phenomena in the life sciences can be analyzed by using a fixed-design regression model with a regression function m that exhibits a crossing-point in the following sense: the regression function runs below or above its mean level, respectively, according as the input variable lies to the left or to the right of that crossing-point, or vice versa. We propose a non-parametric estimator and show weak and strong consistency as long as the crossing-point is unique. The estimator is defined as the maximizing point (argmax) of a certain marked empirical process. For testing the hypothesis H0 that the regression function m is constant (no crossing-point), a decision rule is designed for the specific alternative H1 that m possesses a crossing-point. The pertaining test statistic is the ratio max/argmax of the maximum value and the maximizing point of the marked empirical process. Under the hypothesis the ratio converges in distribution to the corresponding ratio for a reflected Brownian bridge, for which we derive the distribution function. The test is consistent on the whole alternative and superior to the corresponding Kolmogorov–Smirnov test, which is based only on the maximal value max. Some practical examples of possible applications are given, and a study on dental phobia is discussed in more detail.
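As a minimal sketch of the idea, the snippet below builds a CUSUM-type marked empirical process from the centred responses of a fixed design on [0, 1], takes its maximizing point as the crossing-point estimate, and reports a max/argmax ratio. The regression function, noise level and use of the absolute process are illustrative assumptions, and the reflected-Brownian-bridge critical values derived in the paper are not reproduced.

```python
# Sketch: a CUSUM-type crossing-point estimator -- the maximising point of
# a marked empirical process built from the centred responses.
import numpy as np

rng = np.random.default_rng(5)
n = 200
x = np.sort(rng.uniform(0, 1, size=n))             # fixed design on [0, 1]
m = np.where(x < 0.4, -0.5, 0.6)                   # true crossing point at 0.4
y = m + rng.normal(size=n)

# marked empirical process M(x_k) = n^{-1/2} * sum_{i<=k} (y_i - ybar)
M = np.cumsum(y - y.mean()) / np.sqrt(n)
k_hat = np.argmax(np.abs(M))
print(f"estimated crossing point: x = {x[k_hat]:.3f}")
print(f"max / argmax ratio statistic: {np.abs(M[k_hat]) / x[k_hat]:.3f}")
```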

11.
In their advocacy of the rank-transformation (RT) technique for the analysis of data from factorial designs, Mendeş and Yiğit (Statistica Neerlandica, 67, 2013, 1–26) missed important analytical studies identifying the statistical shortcomings of the RT technique, the recommendation that the RT technique not be used, and important advances that have been made for properly analyzing data in a non-parametric setting. Applied data analysts are at risk of being misled by Mendeş and Yiğit when statistically sound techniques are available for the proper non-parametric analysis of data from factorial designs. The appropriate methods express hypotheses in terms of normalized distribution functions, and the test statistics account for variance heterogeneity.

12.
In this paper, we develop a set of new persistence change tests which are similar in spirit to those of Kim [Journal of Econometrics (2000) Vol. 95, pp. 97–116], Kim et al. [Journal of Econometrics (2002) Vol. 109, pp. 389–392] and Busetti and Taylor [Journal of Econometrics (2004) Vol. 123, pp. 33–66]. While the existing tests are based on ratios of sub-sample Kwiatkowski et al. [Journal of Econometrics (1992) Vol. 54, pp. 159–178]-type statistics, our proposed tests are based on the corresponding functions of sub-sample implementations of the well-known maximal recursive-estimates and re-scaled range fluctuation statistics. Our statistics are used to test the null hypothesis that a time series displays constant trend-stationary [I(0)] behaviour against the alternative of a change in persistence either from trend stationarity to difference stationarity [I(1)], or vice versa. Representations for the limiting null distributions of the new statistics are derived, and both finite-sample and asymptotic critical values are provided. The consistency of the tests against persistence change processes is also demonstrated. Numerical evidence suggests that our proposed tests provide a useful complement to the extant persistence change tests. An application of the tests to US inflation rate data is provided.
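To convey the flavour of the ratio-based statistics cited above, the sketch below computes a simplified sub-sample ratio statistic in the spirit of the Kim (2000)-type tests: the ratio of mean squared partial sums of demeaned sub-samples, maximized over a trimmed set of candidate breakpoints. The helpers `kpss_num` and `ratio_stat` are illustrative, no long-run variance correction or detrending is applied, and no critical values are computed.

```python
# Sketch: a simplified sub-sample ratio statistic for a change in
# persistence from I(0) to I(1).  Demeaned sub-samples only; illustrative.
import numpy as np

def kpss_num(u):
    """Mean squared partial sums of a demeaned sub-sample, scaled by 1/T."""
    s = np.cumsum(u - u.mean())
    return np.mean(s**2) / len(u)

def ratio_stat(y, trim=0.2):
    T = len(y)
    taus = range(int(trim * T), int((1 - trim) * T) + 1)
    # ratio of second-subsample to first-subsample KPSS-type numerators
    return max(kpss_num(y[t:]) / kpss_num(y[:t]) for t in taus)

rng = np.random.default_rng(9)
T = 200
i0_series = rng.normal(size=T)                         # I(0) throughout
switch = np.concatenate([rng.normal(size=T // 2),      # I(0) then I(1)
                         np.cumsum(rng.normal(size=T // 2))])
print(f"ratio statistic, constant I(0): {ratio_stat(i0_series):.2f}")
print(f"ratio statistic, I(0) -> I(1):  {ratio_stat(switch):.2f}")
```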

13.
In an influential paper, Pesaran ('A simple panel unit root test in presence of cross-section dependence', Journal of Applied Econometrics, Vol. 22, pp. 265–312, 2007) proposes two unit root tests for panels with a common factor structure. These are the CADF and CIPS test statistics, which are amongst the most popular test statistics in the literature. One feature of these statistics is that their limiting distributions are highly non-standard, making implementation relatively complicated. In this paper, we take this feature as our starting point to develop modified CADF and CIPS test statistics that support standard chi-squared and normal inference.

14.
15.
The negativity of the substitution matrix implies that its latent roots are non-positive. When inequality restrictions are tested, standard test statistics such as the likelihood ratio or Wald test are not χ²-distributed in large samples. We propose a Wald test for testing the negativity of the substitution matrix. The asymptotic distribution of the statistic is a mixture of χ²-distributions. The Wald test provides an exact critical value for a given significance level. The problems involved in computing the exact critical value can be avoided by using the upper and lower bound critical values derived by Kodde and Palm (1986). Finally, the methods are applied to the empirical results obtained by Barten and Geyskens (1975).

16.
The t regression models provide a useful extension of the normal regression models for datasets involving errors with longer-than-normal tails. Homogeneity of variances (if they exist) is a standard assumption in t regression models. However, this assumption is not necessarily appropriate. This paper is devoted to tests for heteroscedasticity in general t linear regression models. The asymptotic properties of the score tests, including their asymptotic chi-square distributions and approximate powers under local alternatives, are studied. Based on the modified profile likelihood (Cox and Reid in J R Stat Soc Ser B 49(1):1–39, 1987), an adjusted score test for heteroscedasticity is developed. The properties of the score test and its adjustment are investigated through Monte Carlo simulations. The test methods are illustrated with land rent data (Weisberg in Applied Linear Regression. Wiley, New York, 1985).
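For orientation, a score-type test for heteroscedasticity in the plain normal linear model is shown below using the classical Breusch–Pagan statistic from statsmodels; this is only an analogue of the idea, not the adjusted score test for t regression models developed in the paper, and the data are simulated rather than the land rent data.

```python
# Sketch: the Breusch-Pagan score test for heteroscedasticity in a normal
# linear regression, shown as an analogue of the tests described above.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(2)
n = 100
x = rng.uniform(1, 5, size=n)
# the error variance grows with x, so homoscedasticity is false in this DGP
y = 2.0 + 0.5 * x + rng.normal(scale=0.3 * x, size=n)

X = sm.add_constant(x)
resid = sm.OLS(y, X).fit().resid
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(resid, X)
print(f"LM (score) statistic = {lm_stat:.3f}, p = {lm_pvalue:.4f}")
```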

17.
We show that the minimal forward (reverse) recursive unit root tests of Banerjee, Lumsdaine and Stock [Journal of Business & Economic Statistics (1992) Vol. 10, pp. 271–288] are consistent against the alternative of a change in persistence from I(0) to I(1) [I(1) to I(0)]. However, these statistics are also shown to diverge for series which are I(0) throughout. Consequently, a rejection by these tests does not necessarily imply a change in persistence. We propose a further test, based on the ratio of these statistics, which is consistent against changes either from I(0) to I(1) or vice versa, yet does not over-reject for constant I(0) series. Consistent breakpoint estimators are proposed.

18.
We compare the powers of five tests of the coefficient on a single endogenous regressor in instrumental variables regression. Following Moreira [2003, A conditional likelihood ratio test for structural models. Econometrica 71, 1027–1048], all tests are implemented using critical values that depend on a statistic which is sufficient under the null hypothesis for the (unknown) concentration parameter, so these conditional tests are asymptotically valid under weak instrument asymptotics. Four of the tests are based on k-class Wald statistics (two-stage least squares, LIML, Fuller's [Some properties of a modification of the limited information estimator. Econometrica 45, 939–953], and bias-adjusted TSLS); the fifth is Moreira's (2003) conditional likelihood ratio (CLR) test. The heretofore unstudied conditional Wald (CW) tests are found to perform poorly, compared to the CLR test: in many cases, the CW tests have almost no power against a wide range of alternatives. Our analysis is facilitated by a new algorithm, presented here, for the computation of the asymptotic conditional p-value of the CLR test.

19.
Permutation tests for serial independence using three different statistics based on empirical distributions are proposed. These tests are shown to be consistent under the alternative of m-dependence and are all simple to perform in practice. A small simulation study demonstrates that the proposed tests have good power in small samples. The tests are then applied to Canadian gross domestic product (GDP) data, corroborating the random-walk hypothesis of GDP growth.
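The permutation mechanics are easy to illustrate with a simpler statistic than the empirical-distribution-based ones used in the paper. The sketch below permutes a simulated growth series and compares the observed lag-1 autocorrelation (an illustrative choice, via the helper `lag1_autocorr`) with its permutation distribution.

```python
# Sketch: a permutation test of serial independence based on the lag-1
# sample autocorrelation.  Simulated data, not Canadian GDP.
import numpy as np

rng = np.random.default_rng(4)
growth = rng.normal(0.005, 0.01, size=160)          # i.i.d. in this DGP, so H0 holds

def lag1_autocorr(z):
    z = z - z.mean()
    return np.dot(z[:-1], z[1:]) / np.dot(z, z)

obs = lag1_autocorr(growth)
B = 4999
perm = np.array([lag1_autocorr(rng.permutation(growth)) for _ in range(B)])
p_val = (1 + np.sum(np.abs(perm) >= abs(obs))) / (B + 1)
print(f"lag-1 autocorrelation = {obs:.3f}, permutation p-value = {p_val:.3f}")
```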

20.
In the analysis of variance (ANOVA) the use of orthogonal contrasts is quite common and is a traditional topic in many basic ANOVA courses. Similar ideas apply to rank tests. In this paper we present a simple and general method that allows an orthogonal contrast decomposition of rank test statistics such as the Kruskal-Wallis, Friedman and Durbin statistics. The components of the test statistics are informative, particularly when ordered alternatives are of interest. The method can handle ties, and null distributions are readily available. Most of the methods are not new, but the way we present them is. Moreover, our formulation makes it easier to understand and interpret the tests when the traditional location-shift assumption does not hold. The methods are illustrated using several data sets.
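As a small illustration of the idea, the sketch below computes the overall Kruskal-Wallis statistic and a single linear-trend contrast component on the group mean ranks, which is approximately chi-square with 1 df under H0 when there are no ties. The data, group sizes and contrast coefficients are illustrative assumptions, and a full decomposition as described in the paper would use a complete set of contrasts orthogonal with respect to the group sizes.

```python
# Sketch: Kruskal-Wallis statistic plus a linear contrast component on the
# mean ranks (chi-square, 1 df, ignoring ties).
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
groups = [rng.normal(loc=mu, size=12) for mu in (0.0, 0.3, 0.6, 0.9)]  # ordered shift

H, p_overall = stats.kruskal(*groups)
print(f"Kruskal-Wallis H = {H:.3f}, p = {p_overall:.4f}")

# linear contrast on the mean ranks of the k groups
all_y = np.concatenate(groups)
ranks = stats.rankdata(all_y)
sizes = np.array([len(g) for g in groups])
mean_ranks = np.array([r.mean() for r in np.split(ranks, np.cumsum(sizes)[:-1])])

k, N = len(groups), len(all_y)
c = np.arange(k) - (k - 1) / 2.0                   # linear contrast, sums to zero
L = c @ mean_ranks
var_L = N * (N + 1) / 12.0 * np.sum(c**2 / sizes)  # variance of L under H0
chi2_lin = L**2 / var_L
print(f"linear component = {chi2_lin:.3f}, p = {stats.chi2.sf(chi2_lin, df=1):.4f}")
```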
