Similar Documents
20 similar documents found (search time: 31 ms)
1.
p-Values are commonly transformed to lower bounds on Bayes factors, so-called minimum Bayes factors. For the linear model, a sample-size adjusted minimum Bayes factor over the class of g-priors on the regression coefficients has recently been proposed (Held & Ott, The American Statistician 70(4), 335–341, 2016). Here, we extend this methodology to logistic regression to obtain a sample-size adjusted minimum Bayes factor for 2 × 2 contingency tables. We then study the relationship between this minimum Bayes factor and two-sided p-values from Fisher's exact test, as well as from less conservative alternatives, using a novel parametric regression approach. It turns out that for all p-values considered, the maximal evidence against the point null hypothesis is inversely related to the sample size. The same qualitative relationship is observed for minimum Bayes factors over the more general class of symmetric prior distributions. For the p-values from Fisher's exact test, the minimum Bayes factors do not, on average, tend to the large-sample bound as the sample size becomes large, but for the less conservative alternatives the large-sample behaviour is as expected.
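For orientation (this is not the sample-size adjusted bound proposed in the paper), the classic minimum Bayes factor calibration of Sellke, Bayarri and Berger, -e·p·ln(p) for p < 1/e, can be computed directly from a p-value; a minimal sketch:

```python
import math

def min_bayes_factor(p):
    """Classic -e*p*ln(p) lower bound on the Bayes factor for H0 (valid for p < 1/e)."""
    return -math.e * p * math.log(p) if p < math.exp(-1) else 1.0

for p in (0.05, 0.01, 0.001):
    print(f"p = {p:<6}  minimum Bayes factor ~ {min_bayes_factor(p):.3f}")
```

For p = 0.05 this gives roughly 0.41, i.e. even the most favourable alternative makes the data at most about 2.5 times more likely under H1 than under H0.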

2.
T² charts are used to monitor a process when more than one quality variable associated with the process is being observed. Recent studies have shown that the T² chart with variable sample size and sampling interval (VSSI) detects a small shift in the process mean vector faster than the traditional T² chart. This paper considers an economic design of the VSSI T² chart, in which the expected hourly loss is constructed and used as the objective function for optimally determining the design parameters (i.e. the maximum/minimum sample size, the longest/shortest sampling interval, and the warning/action limits) in sampling and charting. Furthermore, the effects of process parameters and cost parameters on the expected hourly loss and the design parameters are examined.
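For context, the statistic underlying such charts is Hotelling's T² for a subgroup against an in-control mean vector and covariance matrix. The sketch below computes it for one simulated subgroup with the covariance assumed known; the VSSI scheme and the economic design itself are not reproduced, and all numbers are illustrative.

```python
import numpy as np

def hotelling_t2(subgroup, mu0, sigma_inv):
    """Hotelling T^2 statistic for an (n x p) subgroup against target mean mu0,
    with sigma_inv the inverse of the assumed-known in-control covariance matrix."""
    n = subgroup.shape[0]
    diff = subgroup.mean(axis=0) - mu0
    return float(n * diff @ sigma_inv @ diff)

rng = np.random.default_rng(0)
sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
subgroup = rng.multivariate_normal([0.0, 0.0], sigma, size=5)
print(f"T^2 = {hotelling_t2(subgroup, np.zeros(2), np.linalg.inv(sigma)):.3f}")
```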

3.
This paper examines the pricing decisions of a seller facing an unknown demand function. It is assumed that partial information, in the form of an independent random sample of values, is available. The optimal price for the inferred demand satisfies a consistency property: as the size of the sample increases, the maximum profit and the price approach their values for the case where demand is known. The main results deduced here are asymptotics for prices: prices converge at a rate of O_p(n^(-1/3)), with a limit that can be expressed as a functional of a Gaussian process. Implications for the comparison of mechanisms are discussed.
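One way to picture the setup is to choose the price that maximizes empirical revenue over an observed sample of values; the sketch below does exactly that and is only an illustration of the inference problem, not the paper's estimator or its asymptotic analysis (the lognormal value distribution is an assumption).

```python
import numpy as np

def empirical_optimal_price(values):
    """Price maximizing price * (empirical fraction of values >= price);
    candidate prices are the observed values themselves."""
    values = np.sort(np.asarray(values, dtype=float))
    n = len(values)
    # With values sorted ascending, n - i values are >= values[i].
    revenues = values * (n - np.arange(n)) / n
    best = int(np.argmax(revenues))
    return values[best], revenues[best]

rng = np.random.default_rng(1)
price, revenue = empirical_optimal_price(rng.lognormal(mean=0.0, sigma=0.5, size=200))
print(f"estimated optimal price {price:.2f}, revenue per buyer {revenue:.2f}")
```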

4.
B. M. Bennett 《Metrika》1972,19(1):36-38
Summary: The properties of the Wilcoxon signed rank sum test W+ [Wilcoxon, 1945] for the hypothesis of symmetry H₀ are discussed under alternatives H to H₀. The probability generating function and the cumulant generating function of W+ are derived, and a limiting form of the distribution is determined.
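A small sketch of computing W+ directly for a sample of differences, with scipy's built-in two-sided test shown for comparison; the data are made up.

```python
import numpy as np
from scipy.stats import rankdata, wilcoxon

def w_plus(diffs):
    """Wilcoxon signed rank statistic W+: sum of the ranks of |d_i| over positive d_i (zeros dropped)."""
    d = np.asarray(diffs, dtype=float)
    d = d[d != 0]
    ranks = rankdata(np.abs(d))
    return ranks[d > 0].sum()

d = np.array([1.2, -0.4, 0.8, 2.1, -0.3, 0.9, 1.5, -1.1])
print("W+ =", w_plus(d))
print(wilcoxon(d))           # scipy's version of the signed rank test
```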

5.
D. G. Kabe 《Metrika》1967,12(1):155-160
Summary: Bush and Olkin (1959) give some results concerning the minimum values of quadratic forms and show some applications of their results to statistics. We generalize some of Bush and Olkin's results to vector quadratic forms and consider some applications of the results to statistics.

6.
To examine complex relationships among variables, researchers in human resource management, industrial-organizational psychology, organizational behavior, and related fields have increasingly used meta-analytic procedures to aggregate effect sizes across primary studies to form meta-analytic correlation matrices, which are then subjected to further analyses using linear models (e.g., multiple linear regression). Because missing effect sizes (i.e., correlation coefficients) and different sample sizes across primary studies can occur when constructing meta-analytic correlation matrices, the present study examined the effects of missingness under realistic conditions, and of various methods for estimating sample size (e.g., minimum sample size, arithmetic mean, harmonic mean, and geometric mean), on the estimated squared multiple correlation coefficient (R²) and the power of the significance test on the overall R² in linear regression. Simulation results suggest that missing data had a more detrimental effect as the number of primary studies decreased and the number of predictor variables increased. It appears that using second-order sample sizes of at least 10 (i.e., at least 10 independent effect sizes) can improve both statistical power and estimation of the overall R² considerably. Results also suggest that although the minimum sample size should not be used to estimate sample size, the other sample size estimates appear to perform similarly.
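For illustration, the sample-size summaries compared in the study (minimum, arithmetic mean, harmonic mean, geometric mean) can be computed as below; the per-study sample sizes are hypothetical.

```python
import numpy as np

def sample_size_estimates(ns):
    """Candidate single sample sizes for analyses of a meta-analytic correlation matrix,
    given the per-study sample sizes behind the pooled correlations."""
    ns = np.asarray(ns, dtype=float)
    return {
        "minimum": float(ns.min()),
        "arithmetic mean": float(ns.mean()),
        "harmonic mean": float(len(ns) / np.sum(1.0 / ns)),
        "geometric mean": float(np.exp(np.mean(np.log(ns)))),
    }

print(sample_size_estimates([85, 120, 200, 64, 310]))
```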

7.
Dr. P. N. Rathie 《Metrika》1972,18(1):216-219
Equivalence of the generalized entropy H_β(P, Φ_t) defined in this paper and Kapur's entropy of order α and type β, i.e. H_α^β(P), is established. The results given recently by Campbell follow as special cases. Presented at the International Conference on System Sciences, Honolulu, January 1968.

8.
Pranab Kumar Sen 《Metrika》1972,18(1):234-237
Summary: For independently distributed error components, the asymptotic relative efficiency (A.R.E.) of Friedman's χ²_r tests with respect to the classical analysis-of-variance test has been studied by Elteren and Noether and by Sen [1967]. The present note extends these results to the case of correlated errors arising in some random-effects or mixed-effects models. Work supported by the U.S. Army Research Office, Durham, Grant DA-ARO-D-31-124-G746.

9.
The present paper obtains the non-null distribution of the product-moment correlation coefficient r when the sample is drawn from a mixture of two bivariate Gaussian distributions. The moments of 1 − r² are used to derive the non-null density of r. Received September 2000.

10.
Ulhas J. Dixit 《Metrika》1994,41(1):127-136
The predictive distribution of the r-th order statistic in a future sample is obtained, based on the original sample from a Weibull distribution in the presence of k outliers. Next, again in the presence of k outliers, the two-sample case is considered, where the prediction concerns the r₂-th order statistic in the second sample based on the r₁-th order statistic in the first sample. Finally, an extension to the p-sample case is made for the particular case of predicting the minimum in the p-th sample based on the minima in earlier samples. An illustration is provided with simulated samples in which the minimum is predicted in the one- and two-sample cases.

11.
This paper illustrates that, under the null hypothesis of no cointegration, the correlation of p-values from a single-equation residual-based test (i.e., the ADF test or a related residual-based test) with a system-based test (trace or maximum eigenvalue) is very low, even as the sample size gets large. With data-generating processes under the null or 'near' it, the two types of tests can yield virtually any combination of p-values regardless of sample size. As a practical matter, we also conduct tests for cointegration on 132 data sets from 34 studies appearing in this Journal and find substantial differences in p-values for the same data set. Copyright © 2004 John Wiley & Sons, Ltd.
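As a hedged illustration of the single-equation side of such a comparison, the Engle–Granger residual-based test is available in statsmodels; the two independent random walks below satisfy the null of no cointegration. This sketch does not reproduce the paper's comparison with system-based (Johansen) tests or its 132 data sets.

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(2)
# Two independent random walks: the null of no cointegration holds.
x = np.cumsum(rng.normal(size=500))
y = np.cumsum(rng.normal(size=500))

t_stat, p_value, _ = coint(y, x)   # ADF test on the residuals of the regression of y on x
print(f"residual-based (Engle-Granger) t = {t_stat:.2f}, p-value = {p_value:.3f}")
```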

12.
The Shewhart and the Bonferroni-adjusted R and S charts are usually applied to monitor the range and the standard deviation of a quality characteristic; these charts are used to detect changes in the process variability of that characteristic. Their control limits are constructed on the assumption that the population follows approximately the normal distribution, with the standard deviation parameter either known or unknown. In this article, we establish two new charts based approximately on the normal distribution. The constant values needed to construct the new control limits depend on the number of sample groups (k) and the subgroup size (n). Additionally, for the proposed approaches the unknown standard deviation is estimated by a uniformly minimum variance unbiased estimator (UMVUE). This estimator has smaller variance than the estimator used in the Shewhart and Bonferroni approaches. In the case of an unknown standard deviation, the proposed approaches give an out-of-control average run length slightly less than the Shewhart approach and considerably less than the Bonferroni-adjustment approach.
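For context, a sketch of the textbook c4-corrected (unbiased) estimate of σ and the classical 3-sigma S-chart limits built from it; this is the standard construction, not the pooled-UMVUE charts proposed in the article, and the simulated subgroups are illustrative.

```python
import numpy as np
from math import gamma, sqrt

def c4(n):
    """Unbiasing constant for the sample standard deviation: E[S] = c4(n) * sigma."""
    return sqrt(2.0 / (n - 1)) * gamma(n / 2) / gamma((n - 1) / 2)

def s_chart_limits(subgroups):
    """3-sigma S-chart limits with sigma estimated by the c4-corrected mean subgroup
    standard deviation (a simpler unbiased estimator than the UMVUE in the article)."""
    s = np.array([np.std(g, ddof=1) for g in subgroups])
    n = len(subgroups[0])
    sigma_hat = s.mean() / c4(n)                       # unbiased estimate of sigma
    center = c4(n) * sigma_hat
    half_width = 3.0 * sigma_hat * sqrt(1.0 - c4(n) ** 2)
    return max(0.0, center - half_width), center, center + half_width

rng = np.random.default_rng(3)
groups = rng.normal(10.0, 2.0, size=(25, 5))           # k = 25 subgroups of size n = 5
print("LCL, CL, UCL =", s_chart_limits(groups))
```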

13.
Summary: Let N = [n_ij] (i = 1, …, r; j = 1, …, c) be the matrix of observed frequencies in an r × c contingency table from r possibly different multinomial populations with respective probabilities p_i = (p_i1, …, p_ic). Freeman and Halton have proposed an exact conditional test for the hypothesis H₀: p_i = (p_1, …, p_c); here the power function β(p) of the exact test is derived. Numerical values of β(p) were previously computed for the special case r = 3, c = 2 [Bennett and Nakamura, 1964].

14.
Dr. H. Vogt 《Metrika》1970,16(1):206-235
Summary: This paper is a shortened version of an identically titled dissertation published by the author in 1968 at Würzburg. For random variables on the sphere it would make no sense to define means in the usual way, as is done, e.g., for random variables on the real line. Introducing concepts of decision theory, W. Uhlmann [1964] defined mean angles for circular random variables whose direction does not depend on the choice of the zero direction. Setting up simple conditions on the loss functions to be used, we ensure that all the means defined in the paper (mean directions, mean great circles, mean axes, mean circles) have the analogous invariance property. The estimators of these means are always the corresponding means of the empirical distribution, defined with respect to the same loss function, and therefore they have the invariance property too. To measure the discrepancy between an estimator and the estimated mean, new loss functions are introduced. It is shown that all the established estimators are unbiased with respect to at least one loss function, i.e. the expected loss is minimized when, among all candidate objects, the corresponding mean is the one estimated by the regarded estimator. This minimum loss is called the dispersion of the estimator with respect to this loss function. It is proved that all the calculated dispersions go to zero at least as n^(-1/2) as n tends to infinity.

15.
The problem of invariant estimation of a continuous distribution function is considered under a general loss function. Minimaxity of the minimum risk invariant estimator of a continuous distribution function is proved for any sample size n ≥ 2.

16.
I. Thomsen 《Metrika》1978,25(1):27-35
Summary: The values of a variable x are assumed known for all elements of a finite population. Between this variable and another variable Y, whose values are registered in a sample survey, there is the usual linear regression relationship. This paper considers problems of design and of estimation of the regression coefficient a and the intercept b. The following Godambe-type theorem is proved: there exists no minimum variance unbiased linear estimator of a and b. We also show that the usual estimators of a and b have minimum variance if attention is restricted to the class of linear estimators unbiased in any given sample.

17.
Ten empirical models of travel behavior are used to measure the variability of structural equation model goodness-of-fit as a function of sample size, multivariate kurtosis, and estimation technique. The estimation techniques are maximum likelihood, asymptotic distribution free, bootstrapping, and the Mplus approach. The results highlight the divergence of these techniques when sample sizes are small and/or multivariate kurtosis is high. Recommendations include using multiple estimation techniques and, when sample sizes are large, sampling the data and re-estimating the models, both to test the robustness of the specifications and to quantify, to some extent, the large-sample bias inherent in the χ² test statistic.

18.
In this paper, we discuss our application of the bootstrap method to construct a confidence interval for the diameter for two-dimensional data with circular tolerances in a gauge repeatability and reproducibility study. The factors simulated to validate performance are the variance components and the sample size. The simulation results show that the bootstrap method can cover the stated nominal coefficient in most scenarios. There is a positive correlation between the width of the confidence intervals and the variance components: the width of the confidence intervals for diameters increases when the variance components (σ̂_x², σ̂_y², or σ̂_xy²) increase. The coverage proportion is not significantly affected by the variance components. Also, neither the width of the confidence interval for the diameter nor the coverage proportion is significantly affected by the sample size. One real example based on a nested design is used to demonstrate the application of the proposed method.
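A generic percentile-bootstrap confidence interval of the kind used here can be sketched as follows; the circular-tolerance 'diameter' data are simulated, and the procedure is a simplification of the nested-design application in the paper.

```python
import numpy as np

def bootstrap_percentile_ci(data, statistic, n_boot=2000, alpha=0.05, seed=0):
    """Generic percentile bootstrap confidence interval for a statistic of a 1-D sample."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    boot = np.array([statistic(rng.choice(data, size=len(data), replace=True))
                     for _ in range(n_boot)])
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

# Simulated two-dimensional measurement errors with a circular tolerance zone.
rng = np.random.default_rng(4)
x, y = rng.normal(0, 0.5, 200), rng.normal(0, 0.5, 200)
diameters = 2.0 * np.hypot(x, y)
print("95% bootstrap CI for the mean diameter:", bootstrap_percentile_ci(diameters, np.mean))
```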

19.
Summary: When elements of a finite population are sampled with varying selection probabilities at each draw, Horvitz and Thompson [1952] formulated certain classes of linear estimators to bear on the problem of providing a sample appraisal of the population total. Horvitz and Thompson's T₁ class is an ordered one, which was examined by the present author [1967b]. For some sampling procedures a best estimator exists within the T₁ class. Subsequently the present author [1967c] applied Murthy's technique [Murthy, 1967] of unordering an ordered estimator and derived a more efficient estimator. The present paper is concerned with applying Murthy's technique to the T₁ class itself and examining the unordered T₁ class. Curiously enough, it is noted that the condition of unbiasedness is sufficient to completely specify the unordered T₁ class for the sampling procedure considered here. Research sponsored by Marathwada University, Aurangabad, India, under Grant No. Research-12-68-69/3314-16.

20.
The Binomial CUSUM is used to monitor the fraction defective (p) of a repetitive process, particularly for detecting small to moderate shifts. The number of defectives from each sample is used to update the monitoring CUSUM. When 100% inspection is in progress, the question arises as to how many sequential observations should be grouped together in forming successive samples. The tabular form of the CUSUM has three parameters: the sample size n, the reference value k, and the decision interval h; these parameters are usually chosen using statistical or economic-statistical criteria based on the Average Run Length (ARL). Unlike earlier studies, this investigation uses the steady-state ARL rather than the zero-state ARL, and the shift can occur anywhere within a sample. The principal finding is that there is a significant gain in the performance of the CUSUM when the sample size n is set to one; this CUSUM might be termed the Bernoulli CUSUM. The advantage of using n = 1 is greater for larger shifts and for smaller values of the in-control ARL. First version: September 1998 / Third revision: September 2000.
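A minimal sketch of the tabular upward CUSUM with sample size n = 1 (the Bernoulli CUSUM); the reference value k and decision interval h below are illustrative placeholders rather than values chosen from ARL criteria as in the paper.

```python
import random

def cusum_run_length(xs, k, h):
    """Tabular upward CUSUM with n = 1: S_i = max(0, S_{i-1} + x_i - k);
    returns the first index at which S_i >= h (a signal), or None if no signal occurs."""
    s = 0.0
    for i, x in enumerate(xs, start=1):
        s = max(0.0, s + x - k)
        if s >= h:
            return i
    return None

random.seed(5)
in_control = [1 if random.random() < 0.05 else 0 for _ in range(2000)]   # p0 = 0.05
shifted    = [1 if random.random() < 0.15 else 0 for _ in range(2000)]   # shifted to p1 = 0.15
print("run length under p0:", cusum_run_length(in_control, k=0.09, h=2.0))
print("run length after the shift:", cusum_run_length(shifted, k=0.09, h=2.0))
```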
