Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
In many manufacturing and service industries, the quality department of the organization works continuously to ensure that the mean or location of the process is close to the target value. Understanding a process requires numerical statements about it, which is why researchers must check the validity of hypotheses concerning the physical phenomena under study. It is usually assumed that the collected data behave well; however, the data may contain outliers, and the presence of one or more outliers can seriously distort statistical inference. Since the sample mean is very sensitive to outliers, this research uses the smooth adaptive (SA) estimator to estimate the population mean. The SA estimator is used to construct testing procedures, called smooth adaptive (SA) tests, for testing various null hypotheses. A Monte Carlo study simulates the probability of a Type I error and the power of the SA test; this is accomplished by constructing confidence intervals for the process mean using the SA estimator and bootstrap methods. The SA test is compared with other tests such as the normal test, the t test, and a nonparametric method, the Wilcoxon signed-rank test, in cases both with and without outliers. For right-skewed distributions, the SA test is the best choice; when the population is right-skewed with one outlier, the SA test controls the probability of a Type I error better than the other tests and is recommended.
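A minimal sketch of the kind of procedure this abstract describes: a Monte Carlo estimate of the Type I error of a percentile-bootstrap test of H0: mu = mu0 under a right-skewed population. The paper's SA estimator is not reproduced here, so `np.mean` stands in, and any robust location estimator could be plugged in its place; all parameter choices are illustrative assumptions.

```python
# Sketch, not the paper's SA procedure: Monte Carlo Type I error of a
# percentile-bootstrap test of H0: mu = mu0 under a lognormal population.
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(x, estimator=np.mean, alpha=0.05, n_boot=999):
    """Percentile bootstrap confidence interval for a location estimator."""
    boot = np.array([estimator(rng.choice(x, size=x.size, replace=True))
                     for _ in range(n_boot)])
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

mu0 = np.exp(0.5)                 # true mean of lognormal(0, 1)
n, n_rep, rejections = 30, 1000, 0
for _ in range(n_rep):
    sample = rng.lognormal(0.0, 1.0, size=n)
    lo, hi = bootstrap_ci(sample)
    rejections += not (lo <= mu0 <= hi)
print(f"estimated Type I error at nominal 0.05: {rejections / n_rep:.3f}")
```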

2.
In this paper, we propose several finite-sample specification tests for multivariate linear regressions (MLR). We focus on tests for serial dependence and ARCH effects with possibly non-Gaussian errors. The tests are based on properly standardized multivariate residuals to ensure invariance to error covariances. The procedures proposed provide: (i) exact variants of standard multivariate portmanteau tests for serial correlation as well as ARCH effects, and (ii) exact versions of the diagnostics presented by Shanken (1990) which are based on combining univariate specification tests. Specifically, we combine tests across equations using a Monte Carlo (MC) test method so that Bonferroni-type bounds can be avoided. The procedures considered are evaluated in a simulation experiment: the latter shows that standard asymptotic procedures suffer from serious size problems, while the MC tests suggested display excellent size and power properties, even when the sample size is small relative to the number of equations, with normal or Student-t errors. The tests proposed are applied to the Fama–French three-factor model. Our findings suggest that the i.i.d. error assumption provides an acceptable working framework once we allow for non-Gaussian errors within 5-year sub-periods, whereas temporal instabilities clearly plague the full-sample dataset. Copyright © 2009 John Wiley & Sons, Ltd.
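A hedged sketch of the Monte Carlo test idea used above to combine tests across equations without Bonferroni bounds: the combined statistic (here, the minimum of per-equation autocorrelation p-values) is re-simulated under the null, so its exact finite-sample distribution replaces any asymptotic approximation. The per-equation test and the i.i.d. normal stand-in for standardized residuals are simplifying assumptions, not the paper's exact construction.

```python
# Sketch of a Monte Carlo (MC) combined test: simulate the null distribution
# of the minimum per-equation p-value instead of applying Bonferroni bounds.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_obs, n_eq, n_mc = 60, 5, 999

def min_pvalue(resid):
    """Minimum across equations of a lag-1 autocorrelation p-value."""
    pvals = []
    for e in resid.T:
        r1 = np.corrcoef(e[:-1], e[1:])[0, 1]
        z = r1 * np.sqrt(len(e))                 # asymptotic N(0,1) under H0
        pvals.append(2 * stats.norm.sf(abs(z)))
    return min(pvals)

obs = min_pvalue(rng.standard_normal((n_obs, n_eq)))   # stand-in residuals
sims = np.array([min_pvalue(rng.standard_normal((n_obs, n_eq)))
                 for _ in range(n_mc)])
p_combined = (1 + np.sum(sims <= obs)) / (n_mc + 1)    # small min-p is extreme
print(f"combined MC p-value: {p_combined:.3f}")
```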

3.
To examine complex relationships among variables, researchers in human resource management, industrial-organizational psychology, organizational behavior, and related fields have increasingly used meta-analytic procedures to aggregate effect sizes across primary studies to form meta-analytic correlation matrices, which are then subjected to further analyses using linear models (e.g., multiple linear regression). Because missing effect sizes (i.e., correlation coefficients) and different sample sizes across primary studies can occur when constructing meta-analytic correlation matrices, the present study examined the effects of missingness under realistic conditions and various methods for estimating sample size (e.g., minimum sample size, arithmetic mean, harmonic mean, and geometric mean) on the estimated squared multiple correlation coefficient (R²) and the power of the significance test on the overall R² in linear regression. Simulation results suggest that missing data had a more detrimental effect as the number of primary studies decreased and the number of predictor variables increased. It appears that using second-order sample sizes of at least 10 (i.e., independent effect sizes) can improve both statistical power and estimation of the overall R² considerably. Results also suggest that although the minimum sample size should not be used to estimate sample size, the other sample size estimates appear to perform similarly.
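A short illustration of the quantities this study compares: R² and its overall F test computed from a hypothetical meta-analytic correlation matrix, with the harmonic mean of the primary-study sample sizes standing in for N (any of the four sample-size estimates could be substituted). The matrix values and sample sizes below are made up.

```python
# Sketch: R^2 and overall F test from a meta-analytic correlation matrix,
# using the harmonic mean of primary-study sample sizes as N.
import numpy as np
from scipy import stats

# correlations among two predictors (x1, x2) and the criterion y
R = np.array([[1.00, 0.30, 0.40],
              [0.30, 1.00, 0.25],
              [0.40, 0.25, 1.00]])              # order: x1, x2, y
Rxx, rxy = R[:2, :2], R[:2, 2]

r2 = rxy @ np.linalg.solve(Rxx, rxy)            # squared multiple correlation

cell_ns = np.array([120, 85, 240, 60, 150])     # hypothetical primary-study n's
n = len(cell_ns) / np.sum(1.0 / cell_ns)        # harmonic mean

k = 2                                           # number of predictors
F = (r2 / k) / ((1 - r2) / (n - k - 1))
p = stats.f.sf(F, k, n - k - 1)
print(f"R^2 = {r2:.3f}, F = {F:.2f}, p = {p:.4f}")
```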

4.
In recent decades several methods have been developed for detecting differential item functioning (DIF), and many studies have aimed to identify both the conditions under which items may or may not be adequate and the factors which affect their power and Type I error. This paper describes a Monte Carlo experiment that was carried out in order to analyse the effect of reference group sample size, focal group sample size and the interaction of the two on the power and Type I error of the Mantel–Haenszel (MH) and Logistic regression (LR) procedures. The data were generated using a three-parameter logistic model, the design was fully-crossed factorial with 12 experimental conditions arising from the crossing of the two main factors, and the dependent variables were power and the rate of false positives calculated across 100 replications. The results enabled the significant factors to be identified and the two statistics to be compared. Practical recommendations are made regarding use of the procedures by psychologists interested in the development and analysis of psychological tests.
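A hedged sketch of the Mantel–Haenszel DIF statistic computed by hand over score strata. The counts are hypothetical, and real DIF software would add the common odds-ratio estimate (MH delta) and matching on total test score; only the chi-square part is shown.

```python
# Sketch: Mantel-Haenszel chi-square for DIF over K score strata.
import numpy as np
from scipy import stats

# per-stratum 2x2 tables: [ref_correct, ref_wrong, focal_correct, focal_wrong]
tables = np.array([[40, 10, 30, 20],
                   [55, 15, 45, 25],
                   [60, 20, 50, 30]], dtype=float)

a, b, c, d = tables.T
n1, n0 = a + b, c + d              # group sizes per stratum
m1, m0 = a + c, b + d              # correct/wrong totals per stratum
T = n1 + n0

expected = n1 * m1 / T
variance = n1 * n0 * m1 * m0 / (T**2 * (T - 1))

chi2 = (abs(a.sum() - expected.sum()) - 0.5) ** 2 / variance.sum()
p = stats.chi2.sf(chi2, df=1)
print(f"MH chi-square = {chi2:.3f}, p = {p:.4f}")
```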

5.
Several multiple comparison procedures (MCPs) were compared for their Type I error rates and for their ability to detect true pairwise differences among means when the assumption of independence of observations was not satisfied. Monte Carlo results showed that, if independence is not met, none of the procedures maintain the Type I error rate at the chosen nominal level, whether measured per comparison or experimentwise. However, once the dependence in the data was corrected, the Type I error rate was maintained at the same level as when the correlation was zero for all the procedures, except for Fisher's (1935) least significant difference procedure (LSD) and Hayter's (1986) two-stage modified LSD procedure (FH). At the same time, as the correlation increased even by a small amount, the power rates also increased, especially when power was examined on a per-pair basis.

6.
A simulation study was conducted to investigate the effect of non-normality and unequal variances on the Type I error rates and power of the classical factorial ANOVA F-test and several alternatives for testing interaction effects, namely the rank transformation procedure (FR), winsorized mean (FW), modified mean (FM), and permutation test (FP). Simulation results showed that as long as there is no significant deviation from normality and homogeneity of variances, all of the tests generally display similar results. However, when there is significant deviation from these assumptions, the other tests are considerably affected, whereas the FR and FP tests are not. Consequently, when the assumptions of the factorial ANOVA F-test are not met, or when it has not been checked whether they are met, the FR and FP tests are more suitable than the classical factorial ANOVA F-test.
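A hedged sketch of a permutation test for the A×B interaction in a balanced two-way design: residuals from the additive (main-effects-only) fit are permuted and the interaction F recomputed, which is one common way to build an FP-type test (the study's exact permutation scheme may differ). Data are simulated with skewed errors and no true effect.

```python
# Sketch: permutation test of the interaction via permuted additive-model residuals.
import numpy as np

rng = np.random.default_rng(1)
a_levels, b_levels, n_cell = 2, 2, 10
A = np.repeat(np.arange(a_levels), b_levels * n_cell)
B = np.tile(np.repeat(np.arange(b_levels), n_cell), a_levels)
y = rng.exponential(scale=1.0, size=A.size)     # skewed errors, no true effect

def interaction_F(y, A, B):
    grand = y.mean()
    ai = np.array([y[A == i].mean() for i in range(a_levels)])
    bj = np.array([y[B == j].mean() for j in range(b_levels)])
    ss_ab, ss_e = 0.0, 0.0
    for i in range(a_levels):
        for j in range(b_levels):
            cell = y[(A == i) & (B == j)]
            ss_ab += len(cell) * (cell.mean() - ai[i] - bj[j] + grand) ** 2
            ss_e += ((cell - cell.mean()) ** 2).sum()
    df_ab = (a_levels - 1) * (b_levels - 1)
    df_e = y.size - a_levels * b_levels
    return (ss_ab / df_ab) / (ss_e / df_e)

# additive (main-effects) fit in a balanced design: a_i + b_j - grand mean
fitted_additive = (np.array([y[A == i].mean() for i in range(a_levels)])[A]
                   + np.array([y[B == j].mean() for j in range(b_levels)])[B]
                   - y.mean())
resid = y - fitted_additive

F_obs = interaction_F(y, A, B)
perm_F = [interaction_F(fitted_additive + rng.permutation(resid), A, B)
          for _ in range(999)]
p = (1 + sum(f >= F_obs for f in perm_F)) / (1 + len(perm_F))
print(f"interaction F = {F_obs:.3f}, permutation p = {p:.3f}")
```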

7.
Hypothesis tests on cointegrating vectors based on the asymptotic distributions of the test statistics are known to suffer from severe small-sample size distortion. In this paper an alternative bootstrap procedure is proposed and evaluated through a Monte Carlo experiment, which finds that the Type I errors are close to the nominal significance levels but power may not be entirely adequate. It is then shown that a combined test based on the outcomes of both the asymptotic and the bootstrap tests has both correct size and low Type II error, thereby improving on currently available procedures.

8.
9.
The asymptotic approach and Fisher's exact approach have often been used for testing the association between two dichotomous variables. The asymptotic approach may be appropriate in large samples but is often criticized for unacceptably high actual Type I error rates in small to medium samples, while Fisher's exact approach suffers from conservative Type I error rates and low power. For these reasons, a number of exact unconditional approaches have been proposed, which are generally more powerful than their exact conditional counterparts. We consider the traditional unconditional approach based on maximization and compare it to our presented approach, which is based on estimation and maximization, and we extend the latter to designs with the total sum fixed. Procedures based on the Pearson chi-square, Yates's corrected, and likelihood ratio test statistics are evaluated with regard to actual Type I error rates and power, and a real example illustrates the various testing procedures. The unconditional approach based on estimation and maximization performs well, with an actual level much closer to the nominal level; the Pearson chi-square and likelihood ratio test statistics work well with this efficient unconditional approach, which is generally more powerful than the other p-value calculation methods in the scenarios considered.
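A hedged sketch of the maximization step behind exact unconditional tests for two independent proportions: enumerate all possible tables, score them with the Pearson chi-square statistic, and maximize the tail probability over a grid of nuisance parameter values. The paper's estimation-and-maximization refinement, which first estimates the nuisance parameter to narrow this search, is omitted; the data are hypothetical.

```python
# Sketch: exact unconditional p-value by maximizing over the nuisance parameter.
import numpy as np
from scipy import stats

def pearson_stat(x, y, m, n):
    """Pearson chi-square for x/m vs y/n successes (0 when pooled p is 0 or 1)."""
    p_hat = (x + y) / (m + n)
    if p_hat in (0.0, 1.0):
        return 0.0
    return (x / m - y / n) ** 2 / (p_hat * (1 - p_hat) * (1 / m + 1 / n))

def unconditional_p(x_obs, y_obs, m, n, grid=np.linspace(0.001, 0.999, 999)):
    t_obs = pearson_stat(x_obs, y_obs, m, n)
    xs, ys = np.meshgrid(np.arange(m + 1), np.arange(n + 1), indexing="ij")
    t = np.vectorize(pearson_stat)(xs, ys, m, n)
    extreme = t >= t_obs - 1e-12                # tables at least as extreme
    p_max = 0.0
    for p in grid:                              # maximize over the nuisance p
        probs = np.outer(stats.binom.pmf(np.arange(m + 1), m, p),
                         stats.binom.pmf(np.arange(n + 1), n, p))
        p_max = max(p_max, probs[extreme].sum())
    return p_max

print(unconditional_p(7, 1, 10, 10))            # hypothetical data: 7/10 vs 1/10
```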

10.
This paper studies the problem of treatment choice between a status quo treatment with a known outcome distribution and an innovation whose outcomes are observed only in a finite sample. I evaluate statistical decision rules, which are functions that map sample outcomes into the planner's treatment choice for the population, based on regret, which is the expected welfare loss due to assigning inferior treatments. I extend previous work started by Manski (2004) that applied the minimax regret criterion to treatment choice problems by considering decision criteria that asymmetrically treat Type I regret (due to mistakenly choosing an inferior new treatment) and Type II regret (due to mistakenly rejecting a superior innovation) and derive exact finite sample solutions to these problems for experiments with normal, Bernoulli and bounded distributions of outcomes. The paper also evaluates the properties of treatment choice and sample size selection based on classical hypothesis tests and power calculations in terms of regret.
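A hedged numerical illustration of Type I and Type II regret, assuming Bernoulli outcomes and the simple empirical-success rule (adopt the innovation when its sample mean exceeds the known status quo mean p0); the numbers are illustrative, not the paper's exact solutions.

```python
# Sketch: maximum Type I and Type II regret of the empirical-success rule.
import numpy as np
from scipy import stats

p0, n = 0.5, 20
k = np.arange(n + 1)

def regret(p):
    """Expected welfare loss at true innovation mean p."""
    pmf = stats.binom.pmf(k, n, p)
    adopt = pmf[k / n > p0].sum()           # P(rule adopts the innovation)
    if p < p0:                              # adopting is the mistake
        return (p0 - p) * adopt
    return (p - p0) * (1 - adopt)           # rejecting is the mistake

grid = np.linspace(0.0, 1.0, 201)
type1 = max(regret(p) for p in grid if p < p0)   # inferior innovation chosen
type2 = max(regret(p) for p in grid if p > p0)   # superior innovation rejected
print(f"max Type I regret = {type1:.4f}, max Type II regret = {type2:.4f}")
```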

11.
In this paper we review and compare diagnostic tests of cross-section independence in the disturbances of panel regression models. We examine tests based on the sample pairwise correlation coefficient or on its transformations, and tests based on the theory of spacings. The ultimate goal is to shed some light on the appropriate use of existing diagnostic tests for cross-equation error correlation. Our discussion is supported by means of a set of Monte Carlo experiments and a small empirical study on health. Results show that tests based on the average of pairwise correlation coefficients work well when the alternative hypothesis is a factor model with non-zero mean loadings. Tests based on spacings are powerful in identifying various forms of strong cross-section dependence, but have low power when they are used to capture spatial correlation.
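A minimal sketch of a test based on average pairwise correlations of panel residuals, in the spirit of Pesaran's CD statistic; the survey covers several variants and the spacing-based tests, which are not shown here. The residuals are simulated stand-ins.

```python
# Sketch: CD-type statistic from the average of pairwise residual correlations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
T, N = 100, 8
resid = rng.standard_normal((T, N))      # stand-in for panel regression residuals

corr = np.corrcoef(resid, rowvar=False)
iu = np.triu_indices(N, k=1)
cd = np.sqrt(2 * T / (N * (N - 1))) * corr[iu].sum()
p = 2 * stats.norm.sf(abs(cd))           # CD ~ N(0,1) under independence
print(f"CD = {cd:.3f}, p = {p:.3f}")
```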

12.
For testing the equality of coefficients of a linear regression model under heteroscedasticity, we suggest an F criterion conditioned on the posterior mean of the ratio of the standard deviations of the error terms in the two subsamples. For pairable subsamples, an exact F test is derived. Sampling experiments show that the Chow test departs substantially from the nominal significance level when the two subsample sizes are unequal, and that the F test conditioned on the posterior mean is superior to the other tests when sample sizes are small.
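For reference, a sketch of the classical Chow test that serves as the benchmark above; the paper's Bayesian variant, conditioning on the posterior mean of the error-standard-deviation ratio, is not reproduced. Subsample sizes and coefficients are illustrative.

```python
# Sketch: classical Chow F test for equal coefficients across two subsamples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n1, n2, k = 40, 15, 2                      # unequal subsamples; intercept + slope
x1, x2 = rng.standard_normal(n1), rng.standard_normal(n2)
y1 = 1.0 + 2.0 * x1 + rng.standard_normal(n1)
y2 = 1.0 + 2.0 * x2 + rng.standard_normal(n2)

def ssr(x, y):
    """Sum of squared OLS residuals for a simple linear regression."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return ((y - X @ beta) ** 2).sum()

ssr_pooled = ssr(np.concatenate([x1, x2]), np.concatenate([y1, y2]))
ssr_split = ssr(x1, y1) + ssr(x2, y2)
F = ((ssr_pooled - ssr_split) / k) / (ssr_split / (n1 + n2 - 2 * k))
p = stats.f.sf(F, k, n1 + n2 - 2 * k)
print(f"Chow F = {F:.3f}, p = {p:.3f}")
```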

13.
This study examined the performance of two alternative estimation approaches in structural equation modeling for ordinal data under different levels of model misspecification, score skewness, sample size, and model size. Both approaches involve analyzing a polychoric correlation matrix as well as adjusting standard error estimates and the model chi-square statistic, but one estimates model parameters with maximum likelihood and the other with robust weighted least squares. Relative bias in parameter estimates and standard error estimates, Type I error rate, and empirical power of the model test, where appropriate, were evaluated through Monte Carlo simulations. These alternative approaches generally provided unbiased parameter estimates when the model was correctly specified. They also provided unbiased standard error estimates and adequate Type I error control in general, unless sample size was small and the measured variables were moderately skewed. Differences between the methods in convergence problems and the evaluation criteria, especially under small-sample and skewed-variable conditions, are discussed.

14.
Identification in most sample selection models depends on the independence of the regressors and the error terms conditional on the selection probability. All quantile and mean functions are parallel in these models, which implies that quantile estimators cannot reveal any heterogeneity, since none exists by assumption. Quantile estimators are nevertheless useful for testing the conditional independence assumption because they are consistent under the null hypothesis. We propose tests of the Kolmogorov–Smirnov type based on the conditional quantile regression process. Monte Carlo simulations show that their size is satisfactory and their power sufficient to detect deviations under plausible data-generating processes. We apply our procedures to female wage data from the 2011 Current Population Survey and show that homogeneity is clearly rejected. Copyright © 2015 John Wiley & Sons, Ltd.
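A hedged sketch of the object these tests examine: the conditional quantile regression coefficient process, whose variation across quantiles signals a violation of conditional independence. The resampling-based critical values used in the paper are omitted; the data are simulated so that the null (parallel quantiles) holds.

```python
# Sketch: slope process across quantiles and a KS-type deviation statistic.
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(4)
n = 500
x = rng.standard_normal(n)
y = 1.0 + 0.5 * x + rng.standard_normal(n)      # parallel quantiles: null true
X = sm.add_constant(x)

taus = np.linspace(0.1, 0.9, 9)
slopes = np.array([QuantReg(y, X).fit(q=t).params[1] for t in taus])

# KS-type statistic: largest deviation of the slope process from its mean
ks_stat = np.max(np.abs(slopes - slopes.mean()))
print(f"slopes across quantiles: {slopes.round(3)}")
print(f"KS-type deviation statistic = {ks_stat:.3f}")
```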

15.
Unit root tests, seeking mean or trend reversion, are frequently applied to panel data. We show that more powerful variants of commonly applied tests are readily available. Moreover, power gains persist when the modifications are applied to bootstrap procedures that may be employed when cross-correlation of a rather general sort among individual panel members is suspected. Copyright © 2004 John Wiley & Sons, Ltd.

16.
This paper studies alternative distributions for the size of price jumps in the S&P 500 index. We introduce a range of new jump-diffusion models and extend popular double-jump specifications that have become ubiquitous in the finance literature. The dynamic properties of these models are tested on both a long time series of S&P 500 returns and a large sample of European vanilla option prices. We discuss the in- and out-of-sample option pricing performance and provide detailed evidence of jump risk premia. Models with double-gamma jump size distributions are found to outperform benchmark models with normally distributed jump sizes.
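A hedged sketch of simulating daily returns from a jump-diffusion in which jump sizes follow a double-gamma law: a positive gamma draw with probability q, a negative one otherwise. All parameter values are illustrative assumptions, not the paper's estimates.

```python
# Sketch: jump-diffusion returns with double-gamma jump sizes.
import numpy as np

rng = np.random.default_rng(5)
n_days, dt = 2520, 1 / 252
mu, sigma, lam = 0.06, 0.15, 20.0       # drift, diffusion vol, jumps per year
q, shape_up, scale_up = 0.4, 2.0, 0.01  # upward-jump prob and gamma params
shape_dn, scale_dn = 2.0, 0.015         # downward-jump gamma params

n_jumps = rng.poisson(lam * dt, size=n_days)
jump_sum = np.zeros(n_days)
for t in np.flatnonzero(n_jumps):
    up = rng.random(n_jumps[t]) < q
    sizes = np.where(up,
                     rng.gamma(shape_up, scale_up, n_jumps[t]),
                     -rng.gamma(shape_dn, scale_dn, n_jumps[t]))
    jump_sum[t] = sizes.sum()

returns = ((mu - 0.5 * sigma**2) * dt
           + sigma * np.sqrt(dt) * rng.standard_normal(n_days) + jump_sum)
print(f"annualized mean {returns.mean() / dt:.3f}, "
      f"annualized vol {returns.std() * np.sqrt(252):.3f}")
```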

17.
In the context of multiple treatments for a particular problem or disorder, it is important theoretically and clinically to investigate whether any one treatment is more effective than another. Typically researchers report the results of the comparison of two treatments, and the meta-analytic problem is to synthesize the various comparisons of two treatments to test the omnibus null hypothesis that the true differences of all particular pairs of treatments are zero versus the alternative that there is at least one true nonzero difference. Two tests, one proposed by Wampold et al. (Psychol. Bull. 122:203–215, 1997) based on the homogeneity of effects, and one proposed here based on the distribution of the absolute value of the effects, were investigated. Based on a Monte Carlo simulation, both tests adequately maintained nominal error rates, and both demonstrated adequate power, although the Wampold test was slightly more powerful for non-uniform alternatives. The error rates and power were essentially unchanged in the presence of random effects. The tests were illustrated with a reanalysis of two published meta-analyses (psychotherapy and antidepressants). It is concluded that both tests are viable for testing the omnibus null hypothesis of no treatment differences.
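A hedged sketch of a homogeneity-style omnibus test in this spirit: under the omnibus null that all true pairwise treatment differences are zero, the observed effects scatter around zero, and a Q statistic measuring their weighted departure from zero is approximately chi-square with df equal to the number of effects. Effect sizes, variances, and the exact form of the Wampold et al. statistic are assumptions here, not taken from the paper.

```python
# Sketch: Q-type omnibus test of H0: all true pairwise differences are zero.
import numpy as np
from scipy import stats

d = np.array([0.12, -0.05, 0.30, 0.08, -0.15])   # hypothetical pairwise effects
v = np.array([0.04, 0.05, 0.06, 0.04, 0.05])     # their sampling variances

Q = np.sum(d**2 / v)            # departure from zero (not from the mean)
p = stats.chi2.sf(Q, df=len(d)) # df = k since the null fixes all effects at 0
print(f"Q = {Q:.3f}, p = {p:.3f}")
```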

18.
In this paper, we propose a unified approach to generating standardized-residuals-based correlation tests for checking GARCH-type models. This approach is valid in the presence of estimation uncertainty, is robust to various standardized error distributions, and is applicable to testing various types of misspecifications. By using this approach, we also propose a class of power-transformed-series (PTS) correlation tests that provides certain robustifications and power extensions to the Box–Pierce, McLeod–Li, Li–Mak, and Berkes–Horváth–Kokoszka tests in diagnosing GARCH-type models. Our simulation and empirical example show that the PTS correlation tests outperform these existing autocorrelation tests in financial time series analysis. Copyright © 2008 John Wiley & Sons, Ltd.
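A minimal sketch of the power-transformed-series idea: apply a Ljung-Box-type statistic to |z_t|^d for standardized residuals z_t (d = 2 recovers a McLeod–Li-type test). The estimation-uncertainty correction developed in the paper is omitted, and the residuals below are simulated i.i.d. stand-ins.

```python
# Sketch: Ljung-Box-type check on power-transformed standardized residuals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
z = rng.standard_t(df=7, size=1000)      # stand-in for standardized residuals
d, m = 1.5, 10                           # power transform and number of lags

s = np.abs(z) ** d
s = s - s.mean()
n = s.size
acf = np.array([(s[k:] * s[:-k]).sum() / (s * s).sum() for k in range(1, m + 1)])
Q = n * (n + 2) * np.sum(acf**2 / (n - np.arange(1, m + 1)))
p = stats.chi2.sf(Q, df=m)
print(f"PTS Ljung-Box Q({m}) on |z|^{d}: {Q:.2f}, p = {p:.3f}")
```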

19.
Credit ratings are ordinal predictions of the default risk of an obligor. The most commonly used measure for evaluating their predictive accuracy is the Accuracy Ratio, or equivalently, the area under the ROC curve. The disadvantages of these measures are that they treat default as a binary variable, thus neglecting the timing of default events, and they fail to use all of the information available from censored observations. We present an alternative measure which is related to the Accuracy Ratio but does not suffer from these drawbacks. As a second contribution, we study statistical inference for the Accuracy Ratio and the proposed measure in the case of multiple cohorts of obligors with overlapping lifetimes. We derive methods which use more sample information and lead to tests which are more powerful than alternatives which filter just the independent part of the dataset. All procedures are illustrated in the empirical section using a dataset of S&P Credit Ratings.
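A short sketch of the baseline measure discussed above: the Accuracy Ratio equals 2·AUC − 1, with the AUC computed from the Mann–Whitney statistic. The censoring-aware alternative proposed in the paper is not reproduced; the ratings and default flags are hypothetical.

```python
# Sketch: Accuracy Ratio from AUC via the Mann-Whitney U statistic.
import numpy as np
from scipy import stats

# hypothetical data: higher rating number = worse credit quality
ratings   = np.array([1, 2, 2, 3, 3, 4, 5, 5, 6, 7])
defaulted = np.array([0, 0, 0, 0, 1, 0, 1, 0, 1, 1])

u = stats.mannwhitneyu(ratings[defaulted == 1], ratings[defaulted == 0],
                       alternative="greater").statistic
auc = u / (defaulted.sum() * (1 - defaulted).sum())
print(f"AUC = {auc:.3f}, Accuracy Ratio = {2 * auc - 1:.3f}")
```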

20.
I introduce a general equilibrium model with active investors and indexers. Indexing causes market segmentation, and the degree of segmentation is a function of the relative wealth of indexers in the economy. Shocks to this relative wealth induce correlated shocks to discount rates of index stocks. The wealthier indexers are, the greater the resulting comovement is. I confirm empirically that S&P 500 stocks comove more with other index stocks and less with non-index stocks, and that changes in passive holdings of S&P 500 stocks predict changes in comovement of index stocks.
