Similar Literature (20 results)
1.
P. A. Lee  S. H. Ong 《Metrika》1986,33(1):1-28
Summary: Four bivariate generalisations (Types I–IV) of the non-central negative binomial distribution (Ong/Lee) are considered. The Type I generalisation is constructed using the latent structure model scheme (Goodman), while the Type II generalisation arises from a variation of this scheme. The Type III generalisation is formed using the method of random elements in common (Mardia). The Type IV is an extension of the Type I generalisation. Properties of these bivariate distributions, including joint central and factorial moments, are discussed, and several recurrence formulae for the probabilities are given. An application to the childhood accident data of Mellinger et al. is considered, with the precision of the Type I maximum likelihood estimates computed.

2.
Sequential tests to decide among three binomial probabilities are needed in many situations, such as acceptance sampling used to determine the proportion of defective items and presence–absence sampling to decide whether pest species are causing economic damage to a crop such as corn. Approximate error probabilities associated with Armitage's (1950, JRSS B) method of simultaneously conducting three sequential probability ratio tests (SPRTs) are derived for the binomial distribution. These approximations provide a basis for adjusting the error rates used to establish the individual SPRTs so that the desired overall error rates are attained. Monte Carlo simulation is used to evaluate the revised procedure.
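Armitage's procedure runs three SPRTs side by side; the sketch below implements the single Wald SPRT that serves as its building block, deciding between two candidate binomial proportions. The thresholds follow Wald's classical approximations, and all parameter values are illustrative.

```python
import math
import random

def binomial_sprt(p0, p1, alpha, beta, sample, max_n=1000):
    """Wald SPRT for H0: p = p0 vs H1: p = p1 on a Bernoulli stream.
    Returns ("H0", "H1", or "undecided") and the sample size used."""
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    n = 0
    for n, x in enumerate(sample, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
        if n >= max_n:
            break
    return "undecided", n

# Example: a stream of defects with true proportion 0.15
random.seed(1)
stream = (random.random() < 0.15 for _ in range(1000))
print(binomial_sprt(p0=0.05, p1=0.20, alpha=0.05, beta=0.10, sample=stream))
```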

3.
The statistical power and Type I error rates of several homogeneity tests usually applied in meta-analysis are compared using Monte Carlo simulation: (1) the chi-square test applied to standardized mean differences, correlation coefficients, and Fisher's r-to-Z transformations, and (2) the S&H-75 (and 90 percent) procedure applied to standardized mean differences and correlation coefficients. The chi-square tests correctly adjusted Type I error rates to the nominal significance level, while the S&H procedures showed higher rates and, consequently, greater statistical power. In all conditions the statistical power was very low, particularly when the meta-analysis comprised few studies, the sample sizes were small, and the differences between the parametric effect sizes were small. Finally, criteria for selecting homogeneity tests are discussed.
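For concreteness, here is a minimal sketch of the chi-square homogeneity (Cochran's Q) test evaluated in studies like this one, assuming each effect size comes with a known sampling variance; the numbers are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def q_homogeneity_test(effects, variances):
    """Cochran's Q test for homogeneity of k independent effect sizes."""
    d = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    d_bar = np.sum(w * d) / np.sum(w)              # weighted mean effect
    q = np.sum(w * (d - d_bar) ** 2)               # Q ~ chi2(k-1) under H0
    p_value = chi2.sf(q, df=len(d) - 1)
    return q, p_value

# Example: five standardized mean differences with their sampling variances
print(q_homogeneity_test([0.30, 0.45, 0.10, 0.52, 0.25],
                         [0.04, 0.05, 0.03, 0.06, 0.04]))
```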

4.
Summary: The paper deals with the Type I error probabilities when a single outlier test is applied repeatedly to a sample. When multiple outliers are present, expressions for the power loss due to masking are derived.

5.
孙成霖 《价值工程》2010,29(6):39-39
Hypothesis testing is one of the topics of statistical inference, and statistical inference also occupies a very important place in sports statistics. Two types of error can occur in hypothesis testing. In many cases we attend only to controlling the Type I error and often disregard the Type II error, yet controlling the Type II error is also quite necessary. This paper discusses the causes of the two types of error and how to control the Type II error, in the hope of offering some methods for its control.
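To make the two error types concrete, the sketch below (an illustration, not taken from the paper) computes the Type II error β of a one-sided z-test at a fixed Type I error α, showing how increasing the sample size controls β.

```python
from scipy.stats import norm

def type2_error(mu0, mu1, sigma, n, alpha=0.05):
    """Type II error of a one-sided z-test, H0: mu = mu0 vs H1: mu = mu1 > mu0."""
    z_crit = norm.ppf(1 - alpha)                 # rejection threshold for alpha
    shift = (mu1 - mu0) * n ** 0.5 / sigma       # standardized shift under H1
    return norm.cdf(z_crit - shift)              # beta = P(fail to reject | H1)

# Increasing n drives the Type II error down at fixed alpha
for n in (10, 30, 100):
    print(n, round(type2_error(mu0=0, mu1=0.5, sigma=1.0, n=n), 4))
```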

6.
S. H. Ong  P. A. Lee 《Metrika》1986,33(1):29-46
Summary: Another bivariate generalisation (Type V) of the non-central negative binomial distribution is considered. This generalisation is constructed (i) as a latent structure model; (ii) as an extension of an accident proneness model investigated by Edwards/Gurland (1961); and (iii) as a reversible stochastic counter model. The third construction yields an apparently new formulation of the Edwards/Gurland model. The probabilities, moments, recurrence formulas and some properties are given. An application to the data used by Holgate (1966) is considered.

7.
Hypothesis tests on cointegrating vectors based on the asymptotic distributions of the test statistics are known to suffer from severe size distortion in small samples. In this paper an alternative bootstrap procedure is proposed and evaluated through a Monte Carlo experiment, finding that the Type I errors are close to the nominal significance levels but power might not be entirely adequate. It is then shown that a combined test based on the outcomes of both the asymptotic and the bootstrap tests has both correct size and low Type II error, thereby improving on the currently available procedures.
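The paper's bootstrap is specific to cointegration tests; as a generic illustration of the idea, the sketch below computes a bootstrap p-value by re-generating data under the null model and recomputing the test statistic, with a toy mean test standing in for the cointegration statistic.

```python
import numpy as np

def bootstrap_pvalue(stat_fn, simulate_h0, data, n_boot=999, seed=None):
    """Bootstrap p-value: compare the observed statistic to its
    distribution under data re-generated from the null model."""
    rng = np.random.default_rng(seed)
    t_obs = stat_fn(data)
    t_boot = np.array([stat_fn(simulate_h0(rng)) for _ in range(n_boot)])
    return (1 + np.sum(t_boot >= t_obs)) / (n_boot + 1)

# Toy example: testing H0: mean = 0 with a t-like statistic
rng = np.random.default_rng(0)
x = rng.normal(0.3, 1.0, size=50)
stat = lambda d: abs(d.mean()) / (d.std(ddof=1) / len(d) ** 0.5)
sim_h0 = lambda r: r.normal(0.0, x.std(ddof=1), size=len(x))
print(bootstrap_pvalue(stat, sim_h0, x, seed=1))
```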

8.
Parametric stochastic frontier models yield firm-level conditional distributions of inefficiency that are truncated normal. Given these distributions, how should one assess and rank firm-level efficiency? This study compares the techniques of estimating (a) the conditional mean of inefficiency and (b) probabilities that firms are most or least efficient. Monte Carlo experiments suggest that the efficiency probabilities are easier to estimate (less noisy) in terms of mean absolute percent error when inefficiency has large variation across firms. Along the way we tackle some interesting problems associated with simulating and assessing estimator performance in the stochastic frontier model.
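The "conditional mean of inefficiency" in (a) is conventionally the Jondrow, Lovell, Materov and Schmidt (1982) estimator; the sketch below evaluates it for the normal-half-normal frontier model with the composed error eps = v - u. The distributional setup and parameter values are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

def jlms_conditional_mean(eps, sigma_u, sigma_v):
    """E[u | eps] for the normal-half-normal frontier eps = v - u
    (Jondrow, Lovell, Materov & Schmidt, 1982)."""
    s2 = sigma_u ** 2 + sigma_v ** 2
    mu_star = -eps * sigma_u ** 2 / s2            # mean of truncated normal
    sig_star = sigma_u * sigma_v / np.sqrt(s2)    # its scale
    z = mu_star / sig_star
    return mu_star + sig_star * norm.pdf(z) / norm.cdf(z)

# More negative residuals imply larger expected inefficiency
print(np.round(jlms_conditional_mean(np.array([-0.5, 0.0, 0.5]), 0.3, 0.2), 4))
```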

9.
This paper studies the Type I error rate obtained when using the Breslow-Day (BD) test to detect nonuniform differential item functioning (NUDIF) in a short test when the average ability of one group is significantly higher than that of the other. The performance is compared with logistic regression (LR) and the standard Mantel-Haenszel procedure (MH). Responses to a 20-item test were simulated without differential item functioning (DIF) according to the three-parameter logistic model. The manipulated factors were sample size and item parameters. The design yielded 40 conditions, each replicated 50 times, and the false positive rate at a 5% significance level obtained with the three methods was recorded for each condition. In most cases, BD performed better than LR and MH in terms of proneness to Type I error. With the BD test, the Type I error rate was similar to the nominal one when, in the case of equally sized groups, the item with the highest discrimination and difficulty parameters was excluded from the goodness-of-fit check against the binomial distribution (the number of false positives among the fifty replications being a Bernoulli variable with parameter 0.05).
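Of the three screens compared, the Mantel-Haenszel statistic is the simplest to state; the sketch below computes the classical MH chi-square over ability-level strata (this detects uniform DIF; the BD test for nonuniform DIF adds a homogeneity test of the stratum odds ratios). The example tables are invented.

```python
from scipy.stats import chi2

def mantel_haenszel_chi2(tables, continuity=True):
    """MH chi-square over K strata of 2x2 tables ((a, b), (c, d)).
    In DIF screening, strata are ability levels; rows are the two groups."""
    a = e = v = 0.0
    for (a_k, b_k), (c_k, d_k) in tables:
        n = a_k + b_k + c_k + d_k
        a += a_k
        e += (a_k + b_k) * (a_k + c_k) / n               # E[a_k] under H0
        v += ((a_k + b_k) * (c_k + d_k) * (a_k + c_k) * (b_k + d_k)
              / (n ** 2 * (n - 1)))                      # hypergeometric variance
    stat = (abs(a - e) - (0.5 if continuity else 0.0)) ** 2 / v
    return stat, chi2.sf(stat, df=1)

# Three ability strata, rows = (reference, focal), cols = (correct, wrong)
tables = [((30, 10), (25, 15)), ((40, 20), (35, 25)), ((20, 25), (15, 30))]
print(mantel_haenszel_chi2(tables))
```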

10.
We study the relationship between wealth and labour market transitions. A lifecycle model, in which individuals face uncertainty about the availability of jobs, serves as a basis for a reduced-form specification of the probabilities of labour market transitions, which depend on wealth according to the model. Theory implies a negative effect of wealth on the probability of becoming or staying employed. This implication is tested in a reduced-form model of labour market transitions in which we allow for random effects, initial conditions, and measurement error in wealth. Elasticities of transition probabilities with respect to wealth are presented.

11.
A probabilistic forecast is the estimated probability with which a future event will occur. One interesting feature of such forecasts is their calibration, or the match between the predicted probabilities and the actual outcome probabilities. Calibration has been evaluated in the past by grouping probability forecasts into discrete categories. We show here that we can do this without discrete groupings; the kernel estimators that we use produce efficiency gains and smooth estimated curves relating the predicted and actual probabilities. We use such estimates to evaluate the empirical evidence on the calibration error in a number of economic applications, including the prediction of recessions and inflation, using both forecasts made and stored in real time and pseudo-forecasts made using the data vintage available at the forecast date. The outcomes are evaluated using both first-release outcome measures and subsequent revised data. We find substantial evidence of incorrect calibration in professional forecasts of recessions and inflation from the SPF, as well as in real-time inflation forecasts from a variety of output gap models.
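A minimal sketch of the kernel idea: a Nadaraya-Watson regression of realized outcomes on forecast probabilities estimates the calibration curve without discrete groupings; a well-calibrated forecaster's curve coincides with the 45-degree line. The Gaussian kernel, bandwidth, and data below are illustrative, not the authors' choices.

```python
import numpy as np

def kernel_calibration_curve(p_forecast, outcomes, grid, bandwidth=0.05):
    """Nadaraya-Watson estimate of P(outcome = 1 | forecast = p) on a grid.
    Calibration holds where the estimated curve equals the grid value."""
    p = np.asarray(p_forecast, float)
    y = np.asarray(outcomes, float)
    curve = []
    for g in grid:
        w = np.exp(-0.5 * ((p - g) / bandwidth) ** 2)   # Gaussian kernel weights
        curve.append(np.sum(w * y) / np.sum(w))
    return np.array(curve)

# Miscalibrated toy forecaster: the true event probability is p**2, not p
rng = np.random.default_rng(0)
p = rng.uniform(0, 1, 5000)
y = (rng.uniform(0, 1, 5000) < p ** 2).astype(float)
grid = np.linspace(0.1, 0.9, 9)
print(np.round(kernel_calibration_curve(p, y, grid), 2))  # compare with grid
```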

12.
Bitcoin (BTC), as the dominant cryptocurrency, has lately attracted tremendous attention due to its excessive volatility. This paper proposes time-varying transition probability Markov-switching GARCH (TV-MSGARCH) models, incorporating BTC daily trading volume and daily Google searches singly and jointly as exogenous variables, to model the volatility dynamics of the BTC return series. Extensive comparisons are carried out to evaluate the modelling performance of the proposed models against benchmark models such as GARCH, GJRGARCH, threshold GARCH, and constant transition probability MSGARCH and MSGJRGARCH. Results reveal that the TV-MSGARCH models with skewed and fat-tailed distributions dominate the other models in in-sample fit based on the Akaike information criterion and other benchmark criteria. Furthermore, the TV-MSGARCH model with BTC daily trading volume and Student's t error distribution offers the best out-of-sample forecasts, evaluated with the mean square error loss function using Hansen's model confidence set. Filardo's weighted transition probabilities are also computed, and the results show the existence of a time-varying effect on the transition probabilities. Lastly, value-at-risk and expected shortfall forecasts at different levels for long and short positions based on the MSGARCH, MSGJRGARCH and TV-MSGARCH models are also examined.
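All of the models compared share the GARCH(1,1) recursion as a building block; the sketch below implements that plain recursion for reference (the Markov-switching and time-varying-probability layers of the paper sit on top of this and are not reproduced here). Parameter values are illustrative.

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """GARCH(1,1) conditional variance recursion:
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
    r = np.asarray(returns, float)
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()                      # initialize at the sample variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# Illustration on simulated noise
rng = np.random.default_rng(0)
r = rng.normal(size=500) * 0.01
print(garch11_variance(r, omega=1e-6, alpha=0.08, beta=0.90)[:3])
```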

13.
This study examined the performance of two alternative estimation approaches in structural equation modeling for ordinal data under different levels of model misspecification, score skewness, sample size, and model size. Both approaches involve analyzing a polychoric correlation matrix and adjusting standard error estimates and the model chi-square, but one estimates model parameters with maximum likelihood and the other with robust weighted least squares. Relative bias in parameter estimates and standard error estimates, Type I error rate, and empirical power of the model test, where appropriate, were evaluated through Monte Carlo simulations. These alternative approaches generally provided unbiased parameter estimates when the model was correctly specified. They also provided unbiased standard error estimates and adequate Type I error control in general, unless sample size was small and the measured variables were moderately skewed. Differences between the methods in convergence problems and on the evaluation criteria, especially under small-sample and skewed-variable conditions, were discussed.

14.
We introduce tests for finite-sample linear regressions with heteroskedastic errors. The tests are exact, i.e., they have guaranteed Type I error probabilities when bounds are known on the range of the dependent variable, without any assumptions about the noise structure. We provide upper bounds on the probability of Type II errors, and apply the tests to empirical data.
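The bounded-outcome principle behind such guarantees can be shown in miniature with a concentration inequality. The sketch below tests a mean rather than a regression coefficient, so it is a simplification of the paper's setting, but its Type I error is guaranteed by Hoeffding's inequality in the same non-asymptotic sense.

```python
import math
import random

def hoeffding_mean_test(y, mu0, alpha=0.05, lo=0.0, hi=1.0):
    """Distribution-free test of H0: E[y] = mu0 for y bounded in [lo, hi].
    By Hoeffding's inequality, the Type I error is at most alpha."""
    n = len(y)
    ybar = sum(y) / n
    half_width = (hi - lo) * math.sqrt(math.log(2 / alpha) / (2 * n))
    return abs(ybar - mu0) > half_width, ybar, half_width

random.seed(0)
y = [random.betavariate(2, 5) for _ in range(200)]   # true mean 2/7 ~ 0.286
print(hoeffding_mean_test(y, mu0=0.5))               # should reject H0
```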

15.
Abstract: In interpreting the product-moment correlation coefficient, we can take advantage of the fact that functions of it exist which admit interpretation as probabilities of error in certain situations.

16.
It is well known that the maximum likelihood estimator (MLE) is inadmissible when estimating the multidimensional Gaussian location parameter. We show that the verdict is much more subtle for the binary location parameter. We consider this problem in a regression framework through a ridge logistic regression (RR) with three alternative ways of shrinking the estimates of the event probabilities. While it is shown that all three variants reduce the mean squared error (MSE) of the MLE, there is at the same time, for every amount of shrinkage, a true value of the location parameter for which we are overshrinking, thus implying the minimaxity of the MLE in this family of estimators. A little shrinkage also always reduces the MSE of individual predictions for all three RR estimators; however, only the naive estimator that shrinks toward 1/2 retains this property for any generalized MSE (GMSE). In contrast, for the two RR estimators that shrink toward the common mean probability, there is always a GMSE for which even a minute amount of shrinkage increases the error. These theoretical results are illustrated on a numerical example. The estimators are also applied to a real data set, and practical implications of our results are discussed.
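The core trade-off is easy to reproduce: the Monte Carlo sketch below shrinks binomial relative frequencies toward 1/2 and shows that a small shrinkage weight lowers the MSE while a large one overshrinks. The weights and the true probability are illustrative, not the paper's.

```python
import numpy as np

# Monte Carlo MSE of shrinking relative frequencies toward 1/2
rng = np.random.default_rng(0)
p_true, n, reps = 0.8, 20, 100_000
p_hat = rng.binomial(n, p_true, size=reps) / n      # MLE of p
print("MLE    ", round(np.mean((p_hat - p_true) ** 2), 5))
for lam in (0.05, 0.10, 0.30):
    shrunk = (1 - lam) * p_hat + lam * 0.5          # shrink toward 1/2
    print(f"lam={lam}", round(np.mean((shrunk - p_true) ** 2), 5))
# A small lam reduces MSE at this p; lam = 0.30 overshrinks, echoing the
# abstract's point that any fixed amount of shrinkage is beaten somewhere.
```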

17.
The paper considers the problem of discriminating between the autoregressive forms of a Koyck distributed lag model and a regression model with autocorrelated disturbances. Several interpretations of an ad hoc rule-of-thumb suggested by Griliches are compared with Bayesian posterior odds analysis in a Monte Carlo experiment. The Bayesian analysis is generally superior to the rules-of-thumb, the latter exhibiting large probabilities of Type I error and low power. The rules-of-thumb excessively favour the distributed lag model, while the Bayesian method is free from such bias. All methods improve with increased sample size.

18.
Monte Carlo methods are used to investigate the relationship between the power of different pretests for autocorrelation and the Type I error and power of the significance test for a resulting two-stage estimate of the slope parameter in a simple regression. Our results suggest it may be preferable to always transform without pretesting. Moreover, we find little room for improvement in the Type I errors and power of two-stage estimators using existing pretests for autocorrelation, compared with the results obtained given perfect knowledge about when to transform (i.e., given a perfect pretest). Rather, researchers should seek better estimators of the transformation parameter itself.
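The "always transform" strategy referred to here is typically a feasible GLS step such as Cochrane-Orcutt; the sketch below performs one such pass (estimate the AR(1) coefficient from OLS residuals, quasi-difference, re-estimate) on simulated data. The specific estimator and data-generating values are illustrative assumptions.

```python
import numpy as np

def cochrane_orcutt(y, x):
    """One-pass Cochrane-Orcutt: estimate AR(1) rho from OLS residuals,
    quasi-difference the data, and re-run OLS."""
    X = np.column_stack([np.ones_like(x), x])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ b
    rho = (e[:-1] @ e[1:]) / (e[:-1] @ e[:-1])     # AR(1) coefficient estimate
    ys = y[1:] - rho * y[:-1]                      # quasi-differenced data
    Xs = X[1:] - rho * X[:-1]
    b2 = np.linalg.lstsq(Xs, ys, rcond=None)[0]
    return rho, b2

rng = np.random.default_rng(0)
x = rng.normal(size=200)
u = np.zeros(200)
for t in range(1, 200):                            # AR(1) errors with rho = 0.7
    u[t] = 0.7 * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u
print(cochrane_orcutt(y, x))                       # rho ~ 0.7, slope ~ 2.0
```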

19.
Several multiple comparison procedures (MCPs) were compared for their rates of Type I error and for their ability to detect true pairwise differences among means when the assumption of independence of observations was not satisfied. Monte Carlo results showed that, if independence is not met, none of the procedures keeps the Type I error rate controlled at the chosen nominal level, whether measured as the error rate per comparison or the experimentwise error rate. However, once the dependence in the data was corrected, the Type I error rate was maintained at the same level as when the correlation was zero for all the procedures except Fisher's (1935) least significant difference procedure (LSD) and Hayter's (1986) two-stage modified LSD procedure (FH). At the same time, as the correlation increased even by a small amount, the power rates also increased, especially when power was examined as per-pair power.
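The dependence problem is easy to demonstrate: in the Monte Carlo sketch below, observations within each group share an intraclass correlation rho, and even Bonferroni-adjusted pairwise t-tests lose control of the familywise Type I error as rho grows. The design values are illustrative, not those of the study.

```python
from itertools import combinations
import numpy as np
from scipy import stats

def familywise_error(k=4, n=20, rho=0.0, reps=2000, alpha=0.05, seed=0):
    """Monte Carlo familywise Type I error of Bonferroni-adjusted pairwise
    t-tests when observations within each group share correlation rho."""
    rng = np.random.default_rng(seed)
    m = k * (k - 1) // 2                         # number of pairwise tests
    hits = 0
    for _ in range(reps):
        common = rng.normal(size=k)              # shared within-group component
        data = [np.sqrt(rho) * common[g] + np.sqrt(1 - rho) * rng.normal(size=n)
                for g in range(k)]
        ps = [stats.ttest_ind(data[i], data[j]).pvalue
              for i, j in combinations(range(k), 2)]
        hits += min(ps) < alpha / m              # any false rejection?
    return hits / reps

for rho in (0.0, 0.2, 0.5):
    print(rho, familywise_error(rho=rho))
```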

20.
Standard estimators for the binomial logit model and for the multinomial logit model allow for an error arising from the use of relative frequencies instead of the true probabilities as the dependent variable. Amemiya and Nold (1975) considered the effect of the presence of an additional specification error in the binomial logit model and proposed a modified logit estimation scheme to take the additional error variance into account. This paper extends their idea to the multinomial logit model and proposes an estimator that is consistent and asymptotically more efficient than the standard multinomial logit estimator. The paper presents a comparison of the results of applying the new estimator and existing estimators to a logit model for the choice of automobile ownership in the United States.
