Similar Documents (20 results found)
2.
Abstract  The comparison of the unknown means of two populations with unknown variances is called the Behrens-Fisher problem if the populations are assumed to be normal and the ratio of the variances is not known. This paper summarizes recipes for solving the problem in practice, as published in the past 35 years by Banerjee, Fisher and Behrens, Pagurova, Wald and Hájek, Welch, and Welch and Aspin, together with two large-sample solutions and one solution often used as an approximation without justification.
The solutions are presented mainly in terms of confidence intervals for the difference of the population means. Some remarks are made concerning the lengths of these intervals and the power of the corresponding tests. The solutions in this paper depend only on the means and variances of the samples drawn from the two populations. All solutions discussed, except the disqualified approximate one, are robust against violations of the normality assumptions and provide, at least asymptotically, good measures of the difference of the population means whenever the samples are drawn from populations with finite second moments.
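Of the recipes above, the Welch (Welch-Aspin) approximate-degrees-of-freedom interval is the one most often used in practice. A minimal sketch, assuming two independent samples and SciPy; the data below are synthetic illustrations, not from the paper:

```python
import numpy as np
from scipy import stats

def welch_interval(x, y, alpha=0.05):
    """Welch-style confidence interval for the difference of two
    population means when the ratio of the variances is unknown."""
    nx, ny = len(x), len(y)
    vx, vy = np.var(x, ddof=1), np.var(y, ddof=1)
    se2 = vx / nx + vy / ny
    # Welch-Satterthwaite approximate degrees of freedom
    df = se2**2 / ((vx / nx)**2 / (nx - 1) + (vy / ny)**2 / (ny - 1))
    diff = np.mean(x) - np.mean(y)
    half = stats.t.ppf(1 - alpha / 2, df) * np.sqrt(se2)
    return diff - half, diff + half

rng = np.random.default_rng(0)
x = rng.normal(10.0, 2.0, size=25)   # population 1: larger variance
y = rng.normal(9.0, 0.5, size=40)    # population 2: smaller variance
print(welch_interval(x, y))
```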

3.
Journal of Econometrics, 2003, 114(1): 165–196
This paper revisits the problem of estimating the regression error variance in a linear multiple regression model after preliminary hypothesis tests for either linear restrictions on the coefficients or homogeneity of variances. There is an extensive literature that discusses these problems, particularly in terms of the sampling properties of the pre-test estimators using various loss functions as the basis for risk analysis. In this paper, a unified framework for analysing the risk properties of these estimators is developed under a general class of loss structures that incorporates virtually all first-order differentiable losses. Particular consideration is given to the choice of critical values for the pre-tests. Analytical results indicate that an α-level substantially higher than those normally used may be appropriate for optimal risk properties under a wide range of loss functions. The paper also generalizes some known analytical results in the pre-test literature and proves other results previously shown only numerically.
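As a hedged illustration of the kind of pre-test estimator analysed here, the sketch below estimates the error variance from the restricted model when a preliminary F-test does not reject the linear restrictions, and from the unrestricted model otherwise; the α-level of 0.05 is a conventional placeholder, whereas the paper argues a substantially higher level may be optimal:

```python
import numpy as np
from scipy import stats

def pretest_sigma2(y, X, R, r, alpha=0.05):
    """Pre-test estimator of the regression error variance: keep the
    restricted estimate if an F-test does not reject R @ beta = r."""
    n, k = X.shape
    q = R.shape[0]
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    sse_u = np.sum((y - X @ beta) ** 2)               # unrestricted SSE
    XtX_inv = np.linalg.inv(X.T @ X)
    A = R @ XtX_inv @ R.T
    beta_r = beta - XtX_inv @ R.T @ np.linalg.solve(A, R @ beta - r)
    sse_r = np.sum((y - X @ beta_r) ** 2)             # restricted SSE
    F = ((sse_r - sse_u) / q) / (sse_u / (n - k))
    if F < stats.f.ppf(1 - alpha, q, n - k):
        return sse_r / (n - k + q)    # pre-test keeps the restrictions
    return sse_u / (n - k)            # restrictions rejected
```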

4.
This paper considers the problem of forecasting realized variance measures. These measures are highly persistent estimates of the underlying integrated variance, but are also noisy. Bollerslev, Patton and Quaedvlieg (2016, Journal of Econometrics 192(1), 1–18) exploited this to extend the commonly used heterogeneous autoregressive (HAR) model by letting the model parameters vary over time depending on the estimated measurement error variances. We propose an alternative specification that allows the autoregressive parameters of HAR models to be driven by a latent Gaussian autoregressive process that may also depend on the estimated measurement error variance. The model parameters are estimated by maximum likelihood using the Kalman filter. Our empirical analysis considers the realized variances of 40 stocks from the S&P 500. Our model based on log variances shows the best overall performance and generates superior forecasts both in terms of a range of different loss functions and for various subsamples of the forecasting period.
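For context, the baseline constant-parameter HAR model regresses next-day (log) realized variance on daily, weekly and monthly averages of past (log) realized variances. A minimal sketch under that assumption, with simulated data standing in for actual realized variances:

```python
import numpy as np

def har_design(rv):
    """Build the HAR regression: log RV(t+1) on a constant and the
    daily, weekly (5-day) and monthly (22-day) average log RV."""
    x = np.log(rv)
    rows, target = [], []
    for t in range(21, len(x) - 1):
        rows.append([1.0, x[t], x[t - 4:t + 1].mean(), x[t - 21:t + 1].mean()])
        target.append(x[t + 1])
    return np.array(rows), np.array(target)

rng = np.random.default_rng(1)
rv = np.exp(rng.normal(-9.0, 0.5, size=500))   # simulated realized variances
X, y = har_design(rv)
beta = np.linalg.lstsq(X, y, rcond=None)[0]    # const, daily, weekly, monthly
print(beta)
```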

5.
This paper generalizes the notion of p-value to obtain a system for assessing evidence in favor of a hypothesis. It is not quite a quantification, in that evidence is a pair of numbers (the p-value and the p-value with null and alternative interchanged), with evidence for the alternative being claimed when the first number is small and the second is at least moderately large. Traditional significance tests present p-values as a measure of evidence against a theory. This usage is rarely called for, since scientists usually wish to accept theories (for the time being), not merely fail to reject them; they are more interested in evidence for a theory. P-values are not simply good or bad for this purpose; their efficacy depends on specifics. We find that a single p-value does not measure evidence for a simple hypothesis relative to a simple alternative, but consideration of both p-values leads to a satisfactory theory. This consideration does not, in general, extend to composite hypotheses, since there best evidence calls for optimization of a bivariate objective function. But in some cases, notably one-sided tests for the exponential family, the optimization can be solved, and a single p-value does provide an appealing measure of best evidence for a hypothesis. One possible extension of this theory is proposed and illustrated with a practical safety-analysis problem involving the difference of two random variables.
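A hedged sketch of the evidence pair for a simple-versus-simple problem; the normal-mean setting (H0: μ = 0 versus H1: μ = 1, known unit variance) is an illustration, not the paper's example:

```python
import numpy as np
from scipy import stats

def evidence_pair(xbar, n, mu0=0.0, mu1=1.0, sigma=1.0):
    """Pair of p-values for H0: mu = mu0 vs H1: mu = mu1 (mu1 > mu0).
    Evidence for H1 is claimed when p0 is small and p1 is at least
    moderately large."""
    p0 = stats.norm.sf(np.sqrt(n) * (xbar - mu0) / sigma)   # P_H0(T >= t)
    p1 = stats.norm.cdf(np.sqrt(n) * (xbar - mu1) / sigma)  # P_H1(T <= t)
    return p0, p1

print(evidence_pair(xbar=0.9, n=16))   # small p0, moderate p1: evidence for H1
```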

6.
The choice of cells in chi-square goodness-of-fit tests is a classical problem. Some recent results in this area are discussed. It is shown that the likelihood ratio of alternatives with respect to null distributions plays a key role when judging different procedures. The discussion centers on the case of a simple hypothesis, but location-scale models and tests of independence in contingency tables are also considered.
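One classical recommendation in this literature is to choose cells that are equiprobable under the null. A minimal sketch for a simple null hypothesis; the choice of k = 10 cells and the standard normal null are arbitrary illustrations:

```python
import numpy as np
from scipy import stats

def chisq_gof_equiprobable(x, null_dist, k=10):
    """Chi-square goodness-of-fit test of a simple hypothesis with
    k cells that are equiprobable under the null distribution."""
    interior = null_dist.ppf(np.arange(1, k) / k)   # k - 1 interior edges
    observed = np.bincount(np.searchsorted(interior, x), minlength=k)
    expected = len(x) / k                           # equiprobable cells
    stat = np.sum((observed - expected) ** 2) / expected
    return stat, stats.chi2.sf(stat, df=k - 1)      # simple null: k - 1 df

rng = np.random.default_rng(2)
print(chisq_gof_equiprobable(rng.standard_normal(200), stats.norm()))
```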

7.
We revisit the methodology and historical development of subsampling, and then explore in detail its use in hypothesis testing, an area which has received surprisingly modest attention. In particular, the general set-up of a possibly high-dimensional parameter with data from K populations is explored. The role of centring the subsampling distribution is highlighted, and it is shown that hypothesis testing with a data-centred subsampling distribution is more powerful. In addition, we demonstrate subsampling's ability to handle a non-standard Behrens-Fisher problem, i.e., a comparison of the means of two or more populations which may possess not only different and possibly infinite variances, but may also possess different distributions. However, our formulation is general, permitting even functional data and/or statistics. Finally, we provide theory for K-sample U-statistics that helps establish the asymptotic validity of subsampling confidence intervals and tests in this very general setting.
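A hedged sketch of a centred subsampling test for the equality of two means, in the spirit of the non-standard Behrens-Fisher comparison above; the √n rate assumes finite variances, and the subsample-size rule b = n**0.6 is an arbitrary illustration rather than the paper's prescription:

```python
import numpy as np

def subsampling_mean_test(x, y, n_sub=2000, seed=0):
    """Approximate p-value for H0: mean(x) == mean(y) using the
    subsampling distribution centred at the full-sample statistic."""
    rng = np.random.default_rng(seed)
    theta_hat = x.mean() - y.mean()
    bx = max(2, int(len(x) ** 0.6))   # illustrative subsample sizes
    by = max(2, int(len(y) ** 0.6))
    tau_n = 1.0 / np.sqrt(1.0 / len(x) + 1.0 / len(y))   # full-sample rate
    tau_b = 1.0 / np.sqrt(1.0 / bx + 1.0 / by)           # subsample rate
    dist = np.empty(n_sub)
    for i in range(n_sub):
        sx = rng.choice(x, size=bx, replace=False)
        sy = rng.choice(y, size=by, replace=False)
        # centring at theta_hat (not at 0) is what preserves power
        dist[i] = tau_b * abs((sx.mean() - sy.mean()) - theta_hat)
    return np.mean(dist >= tau_n * abs(theta_hat))
```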

8.
In practice it is an important problem, especially in quality control, to verify that a known regression function holds during a certain period of time. In the present paper, we consider the change-point problem in which, under the null hypothesis, this known regression function holds. As the alternative, we consider a certain non-parametric class of functions that is of particular interest in quality control. We analyze this test problem by using partial sums of the data. Asymptotically, we get Brownian motion under the hypothesis and Brownian motion with a non-zero trend under the alternative. We prove that tests based on partial sums have larger power when the partial sums are taken from the time-reversed data. This can be quantified asymptotically by some new results on Kolmogorov-type tests for Brownian motion with trend. We illustrate our results with a model of interest in quality control and with an example using real data.
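A hedged sketch of the partial-sum construction, including the time-reversal step; the known null regression function m0, the crude scale estimate and the two-sided sup statistic are illustrative simplifications:

```python
import numpy as np

def partial_sum_stat(y, m0, reverse=True):
    """Kolmogorov-type statistic from partial sums of y_i - m0_i.
    Reversing the data in time can increase power against departures
    arising late in the observation period."""
    resid = (y - m0)[::-1] if reverse else (y - m0)
    n = len(resid)
    sigma = resid.std(ddof=1)                  # crude scale estimate
    s = np.cumsum(resid) / (sigma * np.sqrt(n))
    return np.max(np.abs(s))                   # sup of the partial-sum process

# Under H0 the process behaves asymptotically like Brownian motion, so
# critical values can be simulated from sup|B(t)| on [0, 1].
```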

9.
Data weaknesses (such as collinearity) reduce the quality of least-squares estimates by inflating parameter variances. Standard regression diagnostics and statistical tests of hypotheses are unable to indicate such variance inflation and hence cannot detect data weaknesses. In this paper, therefore, we consider a different means of determining the presence of weak data, based on a test for signal-to-noise in which the size of the parameter variance (noise) is assessed relative to the magnitude of the parameter (signal). This test is combined with other collinearity diagnostics to provide a test for the presence of harmful collinearity and/or short data. The entire procedure is illustrated with an equation from the Michigan Quarterly Econometric Model. Tables of critical values for the test are provided in an appendix.
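A hedged sketch of the two ingredients: the signal-to-noise comparison is essentially the squared coefficient estimate relative to its estimated variance, and variance inflation factors are a standard companion collinearity diagnostic; the paper's tabulated critical values are not reproduced here:

```python
import numpy as np

def signal_to_noise_and_vif(y, X):
    """Per-coefficient signal-to-noise ratios beta^2 / var(beta_hat) and
    variance inflation factors for an OLS fit. Assumes column 0 of X is
    the intercept and at least two non-constant regressors follow."""
    n, k = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    cov = (resid @ resid / (n - k)) * np.linalg.inv(X.T @ X)
    s2n = beta**2 / np.diag(cov)            # signal-to-noise per coefficient
    # VIF_j = 1 / (1 - R_j^2) = j-th diagonal of the inverse correlation matrix
    corr = np.corrcoef(X[:, 1:], rowvar=False)
    vif = np.diag(np.linalg.inv(corr))
    return s2n, vif
```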

10.
There is a need to test the hypothesis of exponentiality against a wide variety of alternative hypotheses across many areas of economics and finance. Local, or contiguous, alternatives are the closest alternatives against which it is still possible to have some power; hence goodness-of-fit tests should have some power against all, or a large majority, of local alternatives. Such tests are often based on nonlinear statistics with complicated asymptotic null distributions, so a second desirable property of a goodness-of-fit test is that its statistic be asymptotically distribution-free. We suggest a whole class of goodness-of-fit tests with both of these properties by constructing a new version of the empirical process that weakly converges to a standard Brownian motion under the hypothesis of exponentiality. All statistics based on this process asymptotically behave as statistics from a standard Brownian motion and so are asymptotically distribution-free. We show that the form of the transformation is especially simple in the case of exponentiality. Surprisingly, there are only two asymptotically distribution-free versions of the empirical process for this problem, and only this one has a convenient limit distribution. Many tests of exponentiality have been suggested based on asymptotically linear functionals of the empirical process. We show that none of these can be used as goodness-of-fit tests, contrary to some previous recommendations. Of considerable interest is that a selection of well-known statistics all lead to the same test asymptotically, with negligible asymptotic power against a great majority of local alternatives. Finally, we present an extension of our approach that solves the problem of multiple testing, both for exponentiality and for other, more general hypotheses.
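For contrast with the transformed process advocated above, a hedged sketch of the naive route: a Kolmogorov-Smirnov statistic with the rate estimated from the data is not asymptotically distribution-free, so its null distribution must be simulated (here by parametric bootstrap), which is exactly the inconvenience a transformation to Brownian motion removes:

```python
import numpy as np
from scipy import stats

def ks_exponentiality_bootstrap(x, n_boot=1000, seed=0):
    """KS test of exponentiality with an estimated rate; the statistic is
    not distribution-free, so the p-value is obtained by simulation."""
    rng = np.random.default_rng(seed)
    n = len(x)
    stat = stats.kstest(x, stats.expon(scale=x.mean()).cdf).statistic
    boot = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.exponential(scale=x.mean(), size=n)
        boot[b] = stats.kstest(xb, stats.expon(scale=xb.mean()).cdf).statistic
    return stat, np.mean(boot >= stat)
```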

11.
Several definitions of individual bioequivalence of two formulations of a medical treatment (drug) have been proposed recently. These definitions attempt to adapt the criterion of average bioequivalence, which would be deficient if substantive treatment heterogeneity were present. In some of the proposed definitions, relatively large differences of means can be compensated by differences in the measurement-error variances. We propose a definition based on a simple latent-variable model which overcomes this anomaly and need not involve the U.S. Food and Drug Administration's 80/125 rule. Our approach is based on a moment-matching estimator of the discrepancy between the outcomes underlying the subjects' responses. The distribution of this estimator is a linear combination of independent χ² variates; asymptotically, it can be approximated by a normal distribution. Evidence of individual bioequivalence corresponds to rejecting the hypothesis that the discrepancy is greater than a specified threshold. The approach is illustrated by reanalysing two bioequivalence trials.

12.
This paper considers two empirical likelihood-based methods for estimation, inference, and specification testing in quantile regression models. First, we apply the method of conditional empirical likelihood (CEL) of Kitamura et al. [2004. Empirical likelihood-based inference in conditional moment restriction models. Econometrica 72, 1667–1714] and Zhang and Gijbels [2003. Sieve empirical likelihood and extensions of the generalized least squares. Scandinavian Journal of Statistics 30, 1–24] to quantile regression models. Second, to avoid practical problems of the CEL method induced by the discontinuity of its criterion in the parameters, we propose a smoothed counterpart, called smoothed conditional empirical likelihood (SCEL). We derive asymptotic properties of the CEL and SCEL estimators, parameter hypothesis tests, and model specification tests. Important features are (i) the CEL and SCEL estimators are asymptotically efficient and do not require preliminary weight estimation; (ii) by inverting the CEL and SCEL ratio parameter hypothesis tests, asymptotically valid confidence intervals can be obtained without estimating the asymptotic variances of the estimators; and (iii) in contrast to CEL, the SCEL method can be implemented with standard Newton-type optimization. Simulation results demonstrate that the SCEL method in particular compares favorably with existing alternatives.
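The smoothing device can be shown compactly: the discontinuous quantile-regression moment indicator is replaced by a smooth distribution-function kernel, restoring differentiability in the parameters. A hedged sketch in which the Gaussian smoother and the bandwidth are illustrative choices, not the paper's:

```python
import numpy as np
from scipy import stats

def quantile_moment(beta, y, X, tau, h=None):
    """Conditional moment for quantile regression, tau - 1{y <= X @ beta}.
    With a bandwidth h, the indicator is smoothed into a normal CDF so
    the moment becomes differentiable in beta (the SCEL-style device)."""
    u = y - X @ beta
    if h is None:
        return tau - (u <= 0).astype(float)   # discontinuous (CEL)
    return tau - stats.norm.cdf(-u / h)       # smoothed (SCEL-style)
```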

13.
This paper proposes a likelihood ratio test for rank deficiency of a submatrix of the cointegrating matrix. Special cases of the test include tests of invalid normalization in systems of cointegrating equations, of the feasibility of permanent-transitory decompositions, and of subhypotheses related to neutrality and long-run Granger noncausality. The proposed test has a chi-squared limit distribution and indicates the validity of the normalization with probability one in the limit, for valid normalizations. The asymptotic properties of several derived estimators of the rank are also discussed. It is found that a testing procedure that starts from the hypothesis of minimal rank is preferable.

14.
Summary  We consider a new principle for constructing tests of one simple hypothesis against another simple hypothesis. In order to avoid a certain poverty of the Neyman-Pearson approach, we allow a test to have more than two outcomes, ordered according to the strength of support an observation gives to one hypothesis against the other. As a measure of support we take not the ratio of the likelihoods but the ratio of the two probabilities of an appropriate region in the sample space. A frequency interpretation of our tests is therefore possible.

16.
We construct two classes of smoothed empirical likelihood ratio tests for the conditional independence hypothesis by writing the null hypothesis as an infinite collection of conditional moment restrictions indexed by a nuisance parameter. One class is based on the CDF; another is based on smoother functions. We show that the test statistics are asymptotically normal under the null hypothesis and a sequence of Pitman local alternatives. We also show that the tests possess an asymptotic optimality property in terms of average power. Simulations suggest that the tests are well behaved in finite samples. Applications to some economic and financial time series indicate that our tests reveal some interesting nonlinear causal relations which the traditional linear Granger causality test fails to detect.

17.
A simulation study was conducted to investigate the effect of non-normality and unequal variances on Type I error rates and test power of the classical factorial ANOVA F-test and several alternatives, namely the rank transformation procedure (FR), winsorized mean (FW), modified mean (FM) and permutation test (FP), for testing interaction effects. Simulation results showed that as long as there is no marked deviation from normality and homogeneity of variances, all of the tests generally displayed similar results. However, under significant deviations from the assumptions, all tests except FR and FP were affected to a considerable degree. As a result, when the assumptions of the factorial ANOVA F-test are not met, or when it has not been checked whether they are met, using the FR and FP tests is more suitable than using the classical factorial ANOVA F-test.
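A hedged sketch of the permutation alternative (FP) for interaction in a balanced two-way layout; permuting the raw observations across cells is the simplest scheme, though permutation strategies for interaction effects vary in the literature:

```python
import numpy as np

def interaction_F(y, a, b):
    """Interaction F statistic for a balanced two-way layout; a and b
    are integer factor labels of the same length as y."""
    grand = y.mean()
    ss_int = ss_err = 0.0
    df_err = 0
    for i in np.unique(a):
        for j in np.unique(b):
            cell = y[(a == i) & (b == j)]
            effect = cell.mean() - y[a == i].mean() - y[b == j].mean() + grand
            ss_int += len(cell) * effect**2
            ss_err += np.sum((cell - cell.mean())**2)
            df_err += len(cell) - 1
    df_int = (len(np.unique(a)) - 1) * (len(np.unique(b)) - 1)
    return (ss_int / df_int) / (ss_err / df_err)

def permutation_interaction_test(y, a, b, n_perm=999, seed=0):
    """FP-style permutation p-value for the interaction effect."""
    rng = np.random.default_rng(seed)
    obs = interaction_F(y, a, b)
    perm = np.array([interaction_F(rng.permutation(y), a, b)
                     for _ in range(n_perm)])
    return (1 + np.sum(perm >= obs)) / (n_perm + 1)
```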

18.
This paper empirically examines the possibility that information about a merger leaks prior to the announcement of the first bid for the target firm. The tests for the existence of market anticipation are based on the behavior of the variances implied by the premia of call options listed on the target firms' stocks. We conclude that the evidence is consistent with the hypothesis that the market anticipates an acquisition prior to the first announcement.
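Recovering an implied variance means numerically inverting an option-pricing formula for the volatility that reproduces the observed premium. A hedged sketch using the Black-Scholes formula for a European call; the paper's exact pricing assumptions may differ:

```python
import numpy as np
from scipy import optimize, stats

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * stats.norm.cdf(d1) - K * np.exp(-r * T) * stats.norm.cdf(d2)

def implied_vol(premium, S, K, T, r):
    """Volatility that equates the model price to the market premium."""
    return optimize.brentq(lambda s: bs_call(S, K, T, r, s) - premium,
                           1e-6, 5.0)

# e.g. a 4.50 premium on a 100 stock, at-the-money, 3 months, 2% rate
print(implied_vol(4.50, S=100.0, K=100.0, T=0.25, r=0.02))
```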

19.
We study the problem of testing hypotheses on the parameters of one- and two-factor stochastic volatility (SV) models, allowing for the possible presence of non-regularities such as singular moment conditions and unidentified parameters, which can lead to non-standard asymptotic distributions. We focus on the development of simulation-based exact procedures, whose level can be controlled in finite samples, as well as on large-sample procedures which remain valid under non-regular conditions. We consider Wald-type, score-type and likelihood-ratio-type tests based on a simple moment estimator, which can be easily simulated. We also propose a C(α)-type test which is very easy to implement and exhibits relatively good size and power properties. Besides the usual linear restrictions on the SV model coefficients, the problems studied include testing homoskedasticity against an SV alternative (which involves singular moment conditions under the null hypothesis) and testing the null hypothesis of one factor driving the dynamics of the volatility process against two factors (which raises identification difficulties). Three ways of implementing the tests based on alternative statistics are compared: asymptotic critical values (when available), a local Monte Carlo (or parametric bootstrap) test procedure, and a maximized Monte Carlo (MMC) procedure. The size and power properties of the proposed tests are examined in a simulation experiment. The results indicate that the C(α)-based tests (built upon the simple moment estimator available in closed form) have good size and power properties for regular hypotheses, while Monte Carlo tests are much more reliable than those based on asymptotic critical values. Further, in cases where the parametric bootstrap appears to fail (for example, in the presence of identification problems), the MMC procedure easily controls the level of the tests. Moreover, MMC-based tests exhibit relatively good power despite the conservative feature of the procedure. Finally, we present an application to a time series of returns on the Standard and Poor's Composite Price Index.
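The local Monte Carlo test is simple to state in code: simulate the test statistic under the null at the (estimated) parameter value and report the Monte Carlo p-value. A hedged generic sketch in which the statistic and the null simulator are placeholders supplied by the user:

```python
import numpy as np

def monte_carlo_pvalue(stat_obs, simulate_null, statistic, n_rep=99, seed=0):
    """Local Monte Carlo (parametric bootstrap) p-value: the rank of the
    observed statistic among statistics recomputed on samples simulated
    under the null. Large values of the statistic count against the null;
    for a continuous pivotal statistic this p-value yields an exact test."""
    rng = np.random.default_rng(seed)
    sims = np.array([statistic(simulate_null(rng)) for _ in range(n_rep)])
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)
```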

20.
This paper is concerned with the study of a circular random distribution called the geodesic normal distribution, recently proposed for general manifolds. This distribution, parameterized by two real numbers associated with specific location and dispersion concepts, looks like a standard Gaussian on the real line except that its support is [0, 2π) and that the Euclidean distance is replaced by the geodesic distance on the circle. Some properties are studied, and comparisons with the von Mises distribution in terms of intrinsic and extrinsic means and variances are provided. Finally, the problem of estimating the parameters by maximum likelihood is investigated and illustrated with some simulations.
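A hedged sketch of the density and a maximum likelihood fit; normalizing the kernel numerically and starting the optimizer at the circular mean are illustrative implementation choices:

```python
import numpy as np
from scipy import integrate, optimize

def geodesic_dist(theta, mu):
    """Geodesic (arc-length) distance on the unit circle."""
    d = np.abs((theta - mu) % (2 * np.pi))
    return np.minimum(d, 2 * np.pi - d)

def neg_loglik(params, theta):
    """Negative log-likelihood of the geodesic normal on [0, 2*pi):
    density proportional to exp(-d(theta, mu)^2 / (2 sigma^2))."""
    mu, sigma = params
    Z, _ = integrate.quad(
        lambda t: np.exp(-geodesic_dist(t, mu)**2 / (2 * sigma**2)),
        0.0, 2 * np.pi)                      # normalizing constant
    return len(theta) * np.log(Z) + np.sum(
        geodesic_dist(theta, mu)**2 / (2 * sigma**2))

rng = np.random.default_rng(3)
theta = rng.normal(1.0, 0.4, size=200) % (2 * np.pi)       # crude circular sample
mu0 = np.angle(np.mean(np.exp(1j * theta))) % (2 * np.pi)  # circular-mean start
fit = optimize.minimize(neg_loglik, x0=[mu0, 1.0], args=(theta,),
                        bounds=[(0.0, 2 * np.pi), (1e-3, 10.0)])
print(fit.x)   # estimated (mu, sigma)
```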
