Similar Articles
20 similar articles found (search time: 531 ms)
1.
The asymptotic approach and Fisher's exact approach have often been used for testing the association between two dichotomous variables. The asymptotic approach may be appropriate to use in large samples but is often criticized for being associated with unacceptably high actual type I error rates for small to medium sample sizes. Fisher's exact approach suffers from conservative type I error rates and low power. For these reasons, a number of exact unconditional approaches have been proposed, which have been seen to be generally more powerful than their exact conditional counterparts. We consider the traditional unconditional approach based on maximization and compare it to our presented approach, which is based on estimation and maximization. We extend the unconditional approach based on estimation and maximization to designs with the total sum fixed. The procedures based on the Pearson chi-square, Yates's corrected, and likelihood ratio test statistics are evaluated with regard to actual type I error rates and powers. A real example is used to illustrate the various testing procedures. The unconditional approach based on estimation and maximization performs well, having an actual level much closer to the nominal level. The Pearson chi-square and likelihood ratio test statistics work well with this efficient unconditional approach. This approach is generally more powerful than the other p-value calculation methods in the scenarios considered.
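A minimal sketch (not the authors' code) of the maximization-based unconditional exact p-value for a 2x2 table with two independent binomials and the Pearson chi-square statistic; the function names, the nuisance-parameter grid, and the example counts are illustrative assumptions. The estimation-and-maximization variant discussed in the abstract refines this by first using the p-value at an estimated nuisance parameter as the test statistic.

```python
import numpy as np
from scipy.stats import binom

def pearson_chi2(x1, n1, x2, n2):
    """Pearson chi-square for H0: p1 = p2 (returns 0 when undefined)."""
    p_pool = (x1 + x2) / (n1 + n2)
    if p_pool in (0.0, 1.0):
        return 0.0
    num = (x1 / n1 - x2 / n2) ** 2
    den = p_pool * (1 - p_pool) * (1 / n1 + 1 / n2)
    return num / den

def unconditional_p_max(x1, n1, x2, n2, grid=np.linspace(0.001, 0.999, 999)):
    """'M' approach: maximise the tail probability over the common
    success probability p under the null hypothesis."""
    t_obs = pearson_chi2(x1, n1, x2, n2)
    xs1, xs2 = np.arange(n1 + 1), np.arange(n2 + 1)
    # indicator of tables at least as extreme as the observed one
    extreme = np.array([[pearson_chi2(a, n1, b, n2) >= t_obs - 1e-12
                         for b in xs2] for a in xs1])
    p_vals = []
    for p in grid:
        probs = np.outer(binom.pmf(xs1, n1, p), binom.pmf(xs2, n2, p))
        p_vals.append(probs[extreme].sum())
    return max(p_vals)

print(unconditional_p_max(7, 12, 2, 10))  # illustrative counts only
```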

2.
Serious concerns have been raised that false positive findings are widespread in empirical research in business disciplines. This is largely because researchers almost exclusively adopt the 'p-value less than 0.05' criterion for statistical significance, and they are often not fully aware of large-sample biases that can potentially mislead their research outcomes. This paper proposes that a statistical toolbox (rather than a single hammer) be used in empirical research, offering researchers a range of statistical instruments and alternatives to the p-value criterion, such as Bayesian methods, the optimal significance level, sample size selection, equivalence testing and exploratory data analyses. It is found that the positive results obtained under the p-value criterion cannot stand when the toolbox is applied to three notable studies in finance.
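A minimal sketch of one element of such a toolbox, equivalence testing via two one-sided tests (TOST) for a mean; the equivalence margin, the simulated data and the function name are illustrative assumptions, not anything taken from the paper.

```python
import numpy as np
from scipy import stats

def tost_one_sample(x, delta, mu0=0.0):
    """TOST: evidence that the mean lies within mu0 +/- delta."""
    n = len(x)
    se = np.std(x, ddof=1) / np.sqrt(n)
    t_lower = (np.mean(x) - (mu0 - delta)) / se   # H0: mu <= mu0 - delta
    t_upper = (np.mean(x) - (mu0 + delta)) / se   # H0: mu >= mu0 + delta
    p_lower = 1 - stats.t.cdf(t_lower, df=n - 1)
    p_upper = stats.t.cdf(t_upper, df=n - 1)
    return max(p_lower, p_upper)                  # equivalence p-value

rng = np.random.default_rng(0)
x = rng.normal(0.02, 0.5, size=200)          # an effect close to zero
print(tost_one_sample(x, delta=0.2))         # small value -> evidence of equivalence
```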

3.
In the case of two independent samples, it turns out that, among the procedures taken into consideration, Boschloo's technique of raising the nominal level in the standard conditional test as far as admissible performs best in terms of power against almost all alternatives. The computational burden entailed in exact sample size calculation is comparatively modest for both the uniformly most powerful unbiased randomized and the conservative non-randomized version of the exact Fisher-type test. Computing these values yields a pair of bounds enclosing the exact sample size required for the Boschloo test, and it seems reasonable to replace the exact value with the middle of the corresponding interval. Comparisons between these mid-N estimates and the fully exact sample sizes lead to the conclusion that the extra computational effort required for obtaining the latter is mostly dispensable. This also holds true in the case of paired binary data (McNemar setting). In the latter, the level-corrected score test turns out to be almost as powerful as the randomized uniformly most powerful unbiased test and should be preferred to the McNemar–Boschloo test. The mid-N rule provides a fairly tight upper bound to the exact sample size for the score test for paired proportions.
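A minimal sketch (an assumed implementation, not the paper's code) of Boschloo's idea of raising the nominal level of Fisher's exact test as far as the unconditional size permits; the group sizes, target level and nuisance-parameter grid are illustrative choices.

```python
import numpy as np
from scipy.stats import binom, fisher_exact

def boschloo_raised_level(n1, n2, alpha=0.05, grid=np.linspace(0.001, 0.999, 199)):
    xs1, xs2 = np.arange(n1 + 1), np.arange(n2 + 1)
    # conditional (Fisher) p-value for every possible table
    p_fisher = np.array([[fisher_exact([[a, n1 - a], [b, n2 - b]])[1]
                          for b in xs2] for a in xs1])

    def size(gamma):
        reject = p_fisher <= gamma + 1e-12
        worst = 0.0
        for p in grid:  # unconditional size: maximise over the common success probability
            probs = np.outer(binom.pmf(xs1, n1, p), binom.pmf(xs2, n2, p))
            worst = max(worst, probs[reject].sum())
        return worst

    # candidate raised levels are the attainable Fisher p-values
    candidates = np.sort(np.unique(p_fisher))
    admissible = [g for g in candidates if size(g) <= alpha]
    return max(admissible) if admissible else 0.0

gamma_star = boschloo_raised_level(10, 10)
print(gamma_star)  # nominal level at which Fisher's test can safely be run
```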

4.
Empirical Bayes methods of estimating the local false discovery rate (LFDR) by maximum likelihood estimation (MLE), originally developed for large numbers of comparisons, are applied to a single comparison. Specifically, when a lower bound on the mixing proportion of true null hypotheses is assumed, the LFDR MLE can yield reliable hypothesis tests and confidence intervals given as few as one comparison. Simulations indicate that constrained LFDR MLEs perform markedly better than conventional methods, both in testing and in confidence intervals, for high values of the mixing proportion, but not for low values. (A decision-theoretic interpretation of the confidence distribution made those comparisons possible.) In conclusion, the constrained LFDR estimators and the resulting effect-size interval estimates are not only effective multiple comparison procedures but might also replace p-values and confidence intervals more generally. The new methodology is illustrated with the analysis of proteomics data.
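A rough two-groups sketch of the general idea, not the authors' exact estimator: estimate the null proportion by maximum likelihood subject to a lower bound and report the resulting LFDR. The alternative density N(0, sd_alt^2), the bound of 0.5 and the example z-value are all assumptions made for illustration.

```python
import numpy as np
from scipy.stats import norm

def constrained_lfdr(z, pi0_lower=0.5, sd_alt=3.0):
    z = np.atleast_1d(np.asarray(z, dtype=float))
    f0 = norm.pdf(z)                   # null density N(0, 1)
    f1 = norm.pdf(z, scale=sd_alt)     # assumed alternative density
    grid = np.linspace(pi0_lower, 1.0, 501)
    loglik = [np.sum(np.log(p * f0 + (1 - p) * f1)) for p in grid]
    pi0_hat = grid[int(np.argmax(loglik))]          # constrained MLE of pi0
    mix = pi0_hat * f0 + (1 - pi0_hat) * f1
    return pi0_hat, pi0_hat * f0 / mix              # LFDR(z)

# works even for a single comparison, which is the point of the abstract
pi0_hat, lfdr = constrained_lfdr(z=2.4)
print(pi0_hat, lfdr)
```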

5.
This paper illustrates that, under the null hypothesis of no cointegration, the correlation of p-values from a single-equation residual-based test (e.g., ADF) with a system-based test (trace or maximum eigenvalue) is very low even as the sample size gets large. With data-generating processes under the null or 'near' it, the two types of tests can yield virtually any combination of p-values regardless of sample size. As a practical matter, we also conduct tests for cointegration on 132 data sets from 34 studies appearing in this Journal and find substantial differences in p-values for the same data set. Copyright © 2004 John Wiley & Sons, Ltd.
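A minimal sketch (all settings illustrative) of the comparison described above: under no cointegration, contrast the single-equation residual-based (Engle–Granger) p-value with the Johansen trace-test decision. statsmodels' coint_johansen reports critical values rather than p-values, so only the 5% decisions are compared here.

```python
import numpy as np
from statsmodels.tsa.stattools import coint
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(42)
n_rep, T = 200, 200
eg_p, joh_reject = [], []
for _ in range(n_rep):
    y = np.cumsum(rng.normal(size=(T, 2)), axis=0)   # two independent random walks
    eg_p.append(coint(y[:, 0], y[:, 1])[1])          # Engle-Granger p-value
    res = coint_johansen(y, det_order=0, k_ar_diff=1)
    joh_reject.append(res.lr1[0] > res.cvt[0, 1])    # trace stat vs its 5% critical value

eg_reject = np.array(eg_p) < 0.05
print("EG rejection rate:      ", eg_reject.mean())
print("Johansen rejection rate:", np.mean(joh_reject))
print("agreement of decisions: ", np.mean(eg_reject == np.array(joh_reject)))
```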

6.
During the last three decades, integer-valued autoregressive processes of order p [or INAR(p)] based on different operators have been proposed as natural, intuitive and perhaps efficient models for integer-valued time-series data. However, this literature is surprisingly mute on the usefulness of the standard AR(p) process, which is otherwise meant for continuous-valued time-series data. In this paper, we attempt to explore the usefulness of the standard AR(p) model for obtaining coherent forecasts from integer-valued time series. First, some advantages of this standard Box–Jenkins-type AR(p) process are discussed. We then carry out some simulation experiments, which show the adequacy of the proposed method over the available alternatives. Our simulation results indicate that even when samples are generated from an INAR(p) process, the Box–Jenkins model performs as well as the INAR(p) processes, especially with respect to the mean forecast. Two real data sets are employed to study the expediency of the standard AR(p) model for integer-valued time-series data.
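A minimal sketch of fitting a standard AR(p) model to an integer-valued series; the count series is simulated from an INAR(1)-type mechanism, and rounding the mean forecast to a nonnegative integer is one simple coherence device, not necessarily the procedure used in the paper.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(1)
# an INAR(1)-type count series generated by binomial thinning plus Poisson arrivals
y = np.empty(300, dtype=int)
y[0] = 5
for t in range(1, 300):
    y[t] = rng.binomial(y[t - 1], 0.5) + rng.poisson(2.0)

fit = AutoReg(y.astype(float), lags=1).fit()      # standard (continuous) AR(1)
mean_forecast = fit.forecast(steps=5)             # real-valued mean forecasts
coherent = np.maximum(np.rint(mean_forecast), 0).astype(int)
print(mean_forecast, coherent)                    # coherent = integer-valued point forecast
```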

7.
In toxicity studies, model mis-specification can lead to serious bias or faulty conclusions. As a prelude to subsequent statistical inference, model selection plays a key role in toxicological studies. It is well known that the Bayes factor and the cross-validation method are useful tools for model selection. However, exact computation of the Bayes factor is usually difficult and sometimes impossible, and this may hinder its application. In this paper, we recommend using the simple Schwarz criterion to approximate the Bayes factor for the sake of computational simplicity. To illustrate the importance of model selection in toxicity studies, we consider two real data sets. The first data set comes from a study of dietary fortification with carbonyl iron, in which the Bayes factor and cross-validation are used to determine the number of sub-populations in a mixture normal model. The second example involves a developmental toxicity study in which the selection of dose–response functions in a beta-binomial model is explored.
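A minimal sketch (simulated data, not the paper's dataset) of the Schwarz approximation in the first example's setting: choose the number of sub-populations in a normal mixture and convert the BIC difference into an approximate log Bayes factor.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# a two-component mixture standing in for the carbonyl-iron data
x = np.concatenate([rng.normal(0, 1, 150), rng.normal(4, 1, 150)]).reshape(-1, 1)

bic = {k: GaussianMixture(n_components=k, random_state=0).fit(x).bic(x)
       for k in (1, 2, 3)}
# Schwarz approximation: log BF(2 components vs 1) ~ (BIC_1 - BIC_2) / 2
log_bf_21 = (bic[1] - bic[2]) / 2
print(bic, log_bf_21)   # large positive value favours two sub-populations
```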

8.
Bayes factors that do not require prior distributions are proposed for testing one parametric model versus another. These Bayes factors are relatively simple to compute, relying only on maximum likelihood estimates, and are Bayes consistent at an exponential rate for nested models even when the smaller model is true. These desirable properties derive from the use of data splitting. Large sample properties, including consistency, of the Bayes factors are derived, and a simulation study explores practical concerns. The methodology is illustrated with civil engineering data involving compressive strength of concrete.

9.
The problem of testing non-nested regression models that include lagged values of the dependent variable as regressors is discussed. It is argued that it is essential to test for error autocorrelation if ordinary least squares and the associated J and F tests are to be used. A heteroskedasticity-robust joint test against a combination of the artificial alternatives used for autocorrelation and non-nested hypothesis tests is proposed. Monte Carlo results indicate that implementing this joint test using a wild bootstrap method leads to a well-behaved procedure and gives better control of finite-sample significance levels than asymptotic critical values.

10.
A test statistic is developed for making inference about a block-diagonal structure of the covariance matrix when the dimensionality p exceeds n, where n = N − 1 and N denotes the sample size. The suggested procedure extends the complete independence results. Because the classical hypothesis testing methods based on the likelihood ratio degenerate when p > n, the main idea is to turn instead to a distance function between the null and alternative hypotheses. The test statistic is then constructed using a consistent estimator of this function, where consistency is considered in an asymptotic framework that allows p to grow together with n. The suggested statistic is also shown to be asymptotically normal under the null hypothesis. Some auxiliary results on the moments of products of multivariate normal random vectors and higher-order moments of Wishart matrices, which are important for our evaluation of the test statistic, are derived. We perform an empirical power analysis for a number of alternative covariance structures.

11.
We study the problem of testing hypotheses on the parameters of one- and two-factor stochastic volatility (SV) models, allowing for the possible presence of non-regularities such as singular moment conditions and unidentified parameters, which can lead to non-standard asymptotic distributions. We focus on the development of simulation-based exact procedures, whose level can be controlled in finite samples, as well as on large-sample procedures which remain valid under non-regular conditions. We consider Wald-type, score-type and likelihood-ratio-type tests based on a simple moment estimator, which can be easily simulated. We also propose a C(α)-type test which is very easy to implement and exhibits relatively good size and power properties. Besides the usual linear restrictions on the SV model coefficients, the problems studied include testing homoskedasticity against an SV alternative (which involves singular moment conditions under the null hypothesis) and testing the null hypothesis of one factor driving the dynamics of the volatility process against two factors (which raises identification difficulties). Three ways of implementing the tests based on alternative statistics are compared: asymptotic critical values (when available), a local Monte Carlo (or parametric bootstrap) test procedure, and a maximized Monte Carlo (MMC) procedure. The size and power properties of the proposed tests are examined in a simulation experiment. The results indicate that the C(α)-based tests (built upon the simple moment estimator available in closed form) have good size and power properties for regular hypotheses, while Monte Carlo tests are much more reliable than those based on asymptotic critical values. Further, in cases where the parametric bootstrap appears to fail (for example, in the presence of identification problems), the MMC procedure easily controls the level of the tests. Moreover, MMC-based tests exhibit relatively good power performance despite the conservative feature of the procedure. Finally, we present an application to a time series of returns on the Standard and Poor's Composite Price Index.
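A generic sketch of the local Monte Carlo (parametric bootstrap) test mentioned above: simulate the statistic under the null at estimated parameter values and use the Monte Carlo p-value (1 + #{simulated ≥ observed}) / (N + 1). The toy statistic, the lag-1 autocorrelation of squared returns, which is sensitive to SV-type volatility clustering, is an illustrative choice rather than one of the paper's test statistics.

```python
import numpy as np

def stat(r):
    s = (r - r.mean()) ** 2
    s = s - s.mean()
    return np.sum(s[1:] * s[:-1]) / np.sum(s * s)    # lag-1 autocorr of squared returns

def local_mc_pvalue(r, n_sim=999, seed=0):
    rng = np.random.default_rng(seed)
    s0 = stat(r)
    mu, sigma = r.mean(), r.std(ddof=1)              # estimated parameters of the null model
    sims = np.array([stat(rng.normal(mu, sigma, r.size)) for _ in range(n_sim)])
    return (1 + np.sum(sims >= s0)) / (n_sim + 1)    # Monte Carlo p-value

rng = np.random.default_rng(123)
h = np.empty(500); h[0] = 0.0
for t in range(1, 500):                              # a simple SV-type log-volatility process
    h[t] = 0.95 * h[t - 1] + 0.3 * rng.normal()
returns = np.exp(h / 2) * rng.normal(size=500)
print(local_mc_pvalue(returns))                      # a small p-value is expected here
```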

12.
This article presents the empirical Bayes method for estimation of the transition probabilities of a generalized finite stationary Markov chain whose ith state is a multi-way contingency table. We use a log-linear model to describe the relationship between factors in each state. The prior knowledge about the main effects and interactions is described by a conjugate prior. Following the Bayesian paradigm, the Bayes and empirical Bayes estimators relative to various loss functions are obtained. These procedures are illustrated by a real example. Finally, asymptotic normality of the empirical Bayes estimators is established.

13.
In dynamic panel regression, when the variance ratio of individual effects to disturbance is large, the system-GMM estimator will have large asymptotic variance and poor finite-sample performance. To deal with this variance ratio problem, we propose a residual-based instrumental variables (RIV) estimator, which uses the residual from a first-stage regression of Δy_{i,t−1} as the instrument for the level equation. The proposed RIV estimator is consistent and asymptotically normal under general assumptions. More importantly, its asymptotic variance is almost unaffected by the variance ratio of individual effects to disturbance. Monte Carlo simulations show that the RIV estimator has better finite-sample performance than alternative estimators. The RIV estimator generates less finite-sample bias than the difference-GMM, system-GMM, collapsing-GMM and Level-IV estimators in most cases. Under RIV estimation, the variance ratio problem is well controlled, and the empirical distribution of its t-statistic is similar to the standard normal distribution for moderate sample sizes.

14.
There has been substantial debate about whether GNP has a unit root. However, statistical tests have had little success in distinguishing between unit-root and trend-reverting specifications because of poor statistical properties. This paper develops a new exact small-sample, pointwise most powerful unit root test that is invariant to the unknown mean and scale of the time series tested, that generates exact small-sample critical values, powers and p-values, that has power approximating the maximum possible power, and that is highly robust to conditional heteroscedasticity. This test decisively rejects the unit root null hypothesis when applied to annual US real GNP and US real per capita GNP series. This paper also develops a modified version of the test to address whether a time series contains a permanent, unit root process in addition to a temporary, stationary process. It shows that if these GNP series contain a unit root process in addition to the stationary process, then it is most likely very small. Copyright © 2001 John Wiley & Sons, Ltd.

15.
A simultaneous confidence band provides a variety of inferences on the unknown components of a regression model. There are several recent papers using confidence bands for various inferential purposes; see, for example, Sun et al. (1999), Spurrier (1999), Al-Saidy et al. (2003), Liu et al. (2004), Bhargava & Spurrier (2004), Piegorsch et al. (2005) and Liu et al. (2007). Construction of simultaneous confidence bands for a simple linear regression model has a rich history, going back to the work of Working & Hotelling (1929). The purpose of this article is to consolidate the disparate modern literature on simultaneous confidence bands in linear regression, and to provide expressions for the construction of exact 1 − α level simultaneous confidence bands for a simple linear regression model of either one-sided or two-sided form. We center attention on the three most recognized shapes: hyperbolic, two-segment, and three-segment (which is also referred to as a trapezoidal shape and includes a constant-width band as a special case). Some of these expressions have already appeared in the statistics literature, and some are newly derived in this article. The derivations typically involve a standard bivariate t random vector and its polar coordinate transformation.
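A minimal sketch of the classical two-sided hyperbolic (Working–Hotelling) simultaneous band for simple linear regression, one of the band shapes discussed above: fitted line plus or minus sqrt(2 F_{2,n−2;1−α}) times the standard error of the fitted value. The data and α are illustrative.

```python
import numpy as np
from scipy import stats

def hyperbolic_band(x, y, x_grid, alpha=0.05):
    n = len(x)
    sxx = np.sum((x - x.mean()) ** 2)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx
    b0 = y.mean() - b1 * x.mean()
    s2 = np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2)          # residual variance
    se_fit = np.sqrt(s2 * (1 / n + (x_grid - x.mean()) ** 2 / sxx))
    w = np.sqrt(2 * stats.f.ppf(1 - alpha, 2, n - 2))         # Working-Hotelling constant
    fit = b0 + b1 * x_grid
    return fit - w * se_fit, fit + w * se_fit                 # covers the whole line jointly

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 30)
y = 1.0 + 0.5 * x + rng.normal(0, 1, 30)
lower, upper = hyperbolic_band(x, y, np.linspace(0, 10, 101))
print(lower[:3], upper[:3])
```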

16.
17.
This paper deals with the finite-sample performance of a set of unit-root tests for cross-correlated panels. Most of the available macroeconomic time series cover short time periods. The lack of information, in terms of time observations, implies that univariate tests are not powerful enough to reject the null of a unit root, while panel tests, by exploiting the large number of cross-sectional units, have been shown to be a promising way of increasing the power of unit-root tests. We investigate the finite-sample properties of recently proposed panel unit-root tests for cross-sectionally correlated panels. Specifically, the size and power of Choi's [Econometric Theory and Practice: Frontiers of Analysis and Applied Research: Essays in Honor of Peter C. B. Phillips, Cambridge University Press, Cambridge (2001)], Bai and Ng's [Econometrica (2004), Vol. 72, p. 1127], Moon and Perron's [Journal of Econometrics (2004), Vol. 122, p. 81], and Phillips and Sul's [Econometrics Journal (2003), Vol. 6, p. 217] tests are analysed by a Monte Carlo simulation study. In summary, Moon and Perron's tests show good size and power for different values of T and N, and for different model specifications. Focusing on Bai and Ng's procedure, the simulation study highlights that the pooled Dickey–Fuller generalized least squares test provides higher power than the pooled augmented Dickey–Fuller test for the analysis of non-stationary properties of the idiosyncratic components. Choi's tests are strongly oversized when the common factor influences the cross-sectional units heterogeneously.

18.
For estimating p (≥ 2) independent Poisson means, the paper considers a compromise between maximum likelihood and empirical Bayes estimators. Such compromise estimators enjoy good componentwise as well as ensemble properties. Research supported by NSF Grant Number MCS-8218091.

19.
H. J. Malik, Metrika (1970), 15(1): 19–22
Summary. Distributions are derived for the product of sample values, the sample geometric mean, the product of two minimum values from samples of unequal size, and the product of k minimum values from samples of equal size, for samples from a Pareto population. The distributions can be conveniently transformed to χ². Paper presented at the Eastern Regional Meeting of the Institute of Mathematical Statistics, Upton, Long Island, New York, April 27–29, 1966. Work done while the author was on the faculty of Western Reserve University, Cleveland, Ohio, U.S.A.
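A quick simulation check (with illustrative parameter values) of the χ² connection behind the abstract: if X follows a Pareto distribution with shape a and scale θ, then 2a·log(X/θ) is χ² with 2 degrees of freedom, so 2a·log(∏Xᵢ/θⁿ), a monotone function of the product of sample values, is χ² with 2n degrees of freedom.

```python
import numpy as np
from scipy import stats

a, theta, n, reps = 2.5, 1.7, 8, 20000
rng = np.random.default_rng(0)
x = stats.pareto.rvs(a, scale=theta, size=(reps, n), random_state=rng)
t = 2 * a * np.log(x / theta).sum(axis=1)         # transform of the sample product
print(stats.kstest(t, stats.chi2(df=2 * n).cdf))  # large p-value supports chi2(2n)
```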

20.
This paper replicates Cheung and Lai (Journal of Business and Economic Studies 1995; 13(3): 277–280), who use response surface analysis to obtain approximate finite-sample critical values, adjusted for lag order and sample size, for the augmented Dickey–Fuller test. We obtain results that are quite close to theirs. We provide the Ox source code. We also provide a Windows application with a graphical user interface, which makes obtaining custom critical values quite simple. Copyright © 2015 John Wiley & Sons, Ltd.
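A small-scale sketch of the response-surface idea, far coarser than the replication above and with an illustrative functional form and grid: simulate the 5% critical value of the ADF t-statistic for several sample sizes T and lag orders p, then regress it on simple functions of T and p.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
rows = []
for T in (50, 100, 200):
    for p in (0, 2, 4):
        stats_ = [adfuller(np.cumsum(rng.normal(size=T)), maxlag=p,
                           regression="c", autolag=None)[0]
                  for _ in range(500)]
        rows.append((T, p, np.quantile(stats_, 0.05)))   # simulated 5% critical value

rows = np.array(rows)
X = sm.add_constant(np.column_stack([1 / rows[:, 0], rows[:, 1] / rows[:, 0]]))
fit = sm.OLS(rows[:, 2], X).fit()                        # response surface cv(T, p)
print(fit.params)                                        # intercept, 1/T and p/T terms
```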
