Similar Documents
20 similar documents found.
1.
A characterization of the normal distribution, related to two samples and based on second conditional moments, is obtained. This characterization is then transformed into one based on the UMVU estimators of the density function. These results are generalized to k samples from normal distributions. Finally, applications of these characterization results to goodness-of-fit testing are discussed.

2.
Milan Stehlík, Metrika, 2003, 57(2): 145–164
The aim of this paper is to give some results on the exact density of the I-divergence in the exponential family with gamma distributed observations. It is shown in particular that the I-divergence can be decomposed as a sum of two independent variables with known distributions. Since the considered I-divergence is related to the likelihood ratio statistic, we apply the method to compute the exact distribution of the likelihood ratio tests and discuss the optimality of such exact tests. One of these tests is the exact LR test of the model, which is asymptotically optimal in the Bahadur sense. Numerical examples are provided to illustrate the methods discussed. Received: January 2002. Acknowledgements: I am grateful to Prof. Andrej Pázman for helpful discussions during the setup and the preparation of the paper, and to the referees for constructive comments on earlier versions of the paper. Research is supported by the VEGA grant (Slovak Grant Agency) No. 1/7295/20.
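For readers less familiar with the I-divergence, its general form within an exponential family is a standard identity, stated here only for orientation (the paper's contribution is the exact distribution theory for the gamma case, not this formula): for densities p_θ(x) = h(x) exp{θ^T T(x) − A(θ)},

$$
I(\theta_1 \,\|\, \theta_2) \;=\; \mathbb{E}_{\theta_1}\!\left[\log \frac{p_{\theta_1}(X)}{p_{\theta_2}(X)}\right]
\;=\; A(\theta_2) - A(\theta_1) - (\theta_2 - \theta_1)^{\top} \nabla A(\theta_1).
$$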

3.
This article reviews the content-corrected method for tolerance limits proposed by Fernholz and Gillespie (2001) and addresses some robustness issues that affect the length of the corresponding interval as well as the corrected content value. The content-corrected method for k-factor tolerance limits consists of obtaining a bootstrap-corrected value p* that is robust in the sense of preserving the confidence coefficient for a variety of distributions. We propose several location/scale robust alternatives to obtain robust content-corrected k-factor tolerance limits that produce shorter intervals when outlying observations are present. We analyze the Hadamard differentiability to ensure bootstrap consistency for large samples. We define the breakdown point for the particular statistic to be bootstrapped, and we obtain the influence function and the value of the breakdown point for the traditional and the robust statistics. Two examples showing the advantage of using these robust alternatives are also included.
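As background (a standard definition, not part of the article's contribution), a two-sided k-factor tolerance interval based on a sample mean X̄ and standard deviation S has the form [X̄ − kS, X̄ + kS], with k chosen so that

$$
P\big( F(\bar X + kS) - F(\bar X - kS) \,\ge\, p \big) \;=\; 1 - \alpha,
$$

where F is the population distribution function, p is the required content, and 1 − α is the confidence level; the content-corrected method replaces p by a bootstrap-calibrated value p*.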

4.
For sequences of independent identically distributed random variables with continuous distributions, we provide the optimal upper bounds for the increments of order and record statistics under the condition that the values of future order statistics and records are known. The bounds are expressed in terms of quantiles and absolute moments centered about the quantiles of the parent distribution. We also describe the distributions which approach the bounds with arbitrary desired accuracy. The second author was supported by the Polish State Committee for Scientific Research (KBN) under Grant 5 P03A 012 20. Received: November 2003.

5.
Klaus Ziegler, Metrika, 2001, 53(2): 141–170
In the nonparametric regression model with random design, based on i.i.d. pairs of observations (X_i, Y_i), where the regression function m is given by m(x) = E(Y_i | X_i = x), estimation of the location θ (mode) of a unique maximum of m by the location of a maximum of the Nadaraya-Watson kernel estimator for the curve m is considered. In order to obtain asymptotic confidence intervals for θ, the suitably normalized distribution of the mode estimator is bootstrapped in two ways: we present a paired bootstrap (PB), where resampling is done from the empirical distribution of the pairs of observations, and a smoothed paired bootstrap (SPB), where the bootstrap variables are generated from a smooth bivariate density based on the pairs of observations. While the PB requires only relatively small computational effort when carried out in practice, it is shown to work only in the case of vanishing asymptotic bias, i.e. of “undersmoothing” when compared to optimal smoothing for mode estimation. On the other hand, the SPB, although causing more intricate computations, is able to capture the correct amount of bias if the pilot estimator for m oversmooths. Received: May 2000
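The following is a minimal numerical sketch of the two ingredients described above — the Nadaraya-Watson estimate of m and the paired bootstrap for the location of its maximum. It is illustrative only: the Gaussian kernel, bandwidth h, grid, sample size and data-generating process are assumptions, not taken from the paper.

```python
import numpy as np

def nw_mode(x, y, grid, h):
    """Location of the maximum of the Nadaraya-Watson estimate on a grid."""
    # Gaussian kernel weights, one row per grid point
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    m_hat = (w @ y) / w.sum(axis=1)
    return grid[np.argmax(m_hat)]

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 1, n)
y = np.sin(np.pi * x) + 0.3 * rng.normal(size=n)   # true mode at x = 0.5
grid = np.linspace(0, 1, 201)
h = 0.08                                           # a small, "undersmoothing"-type bandwidth

theta_hat = nw_mode(x, y, grid, h)

# Paired bootstrap: resample (X_i, Y_i) pairs and recompute the mode location
B = 500
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)
    boot[b] = nw_mode(x[idx], y[idx], grid, h)

ci = np.percentile(boot, [2.5, 97.5])              # percentile bootstrap interval for theta
print(theta_hat, ci)
```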

6.
The theory of unimodular matrices has been applied to derive D-optimal main effect plans for s_1 × s_2 factorials in s_1 + s_2 runs. Plans with high A-efficiency have also been given.

7.
8.
In M-estimation of the regression parameter vector in the linear model, we discuss the choice of the support of certain re-descending ψ-functions, both for the case when the distribution of the i.i.d. errors is partially known and for the case when it is completely functionally unknown. Research supported in part by the Natural Sciences and Engineering Research Council of Canada.
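For context, a classical re-descending ψ-function with bounded support is Tukey's biweight (given here only as a familiar example, not necessarily one of the functions analyzed in the paper):

$$
\psi_c(x) = x\left(1 - (x/c)^2\right)^2 \mathbf{1}\{|x| \le c\},
$$

which vanishes outside [−c, c]; the choice of the support parameter c is exactly the kind of question the abstract refers to.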

9.
In this paper we consider the problem of determining E-optimal block designs for experimental situations in which v treatments are to be tested on n experimental units arranged in b blocks and where the block sizes and number of replications assigned to the treatments are allowed to vary. Some sufficient conditions are obtained for designs to be E-optimal in these situations and methods for constructing designs which satisfy the sufficient conditions given are also derived.

10.
Dykstra and Laud's extended gamma process is assumed as a prior process over the cumulative hazard function, Λ(t), for a Bayesian nonparametric estimation of the reliability function R(t) from both exact and censored data. Particular cases and interpretative aspects are also discussed in the light of an illustrative example. This research was supported by Ministero della Pubblica Istruzione, grant 40% (Gruppo di ricerca «Modelli probabilistici»).
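The link between the two quantities in the abstract is the standard identity for continuous lifetimes (a general fact, not specific to the extended gamma prior):

$$
R(t) = \exp\{-\Lambda(t)\}, \qquad \Lambda(t) = \int_0^t \lambda(u)\,du,
$$

so a prior process on the cumulative hazard Λ induces a prior on the reliability function R.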

11.
Stavros Kourouklis, Metrika, 2000, 51(2): 173–179
A characterization result of Kushary (1998) regarding universal admissibility of equivariant estimators in the one-parameter gamma distribution is generalized to a scale family of distributions with monotone likelihood ratio. New examples are given, among them the F-distribution with a scale parameter. In particular, universal admissibility is characterized within the class of location-scale equivariant estimators of the ratio of the variances of two normal distributions with unknown means. In this context the maximum likelihood estimator is shown to be universally inadmissible by virtue of a general sufficient condition for universal inadmissibility of a scale equivariant estimator. Received: January 2000

12.
Yijun Zuo, Metrika, 2000, 52(1): 69–75
Finite sample tail behavior of the Tukey-Donoho halfspace depth based multivariate trimmed mean is investigated with respect to a tail performance measure. It turns out that the tails of the sampling distribution of the α-depth-trimmed mean approach zero at least ⌊αn⌋ times as fast as the tails of the underlying population distribution, and could be n − ⌊αn⌋ + 1 times as fast. In addition, there is an intimate relationship among the tail behavior, the halfspace depth, and the finite sample breakdown point of the estimator. It is shown that the lower tail performance bound of the depth based trimmed mean is essentially the same as its halfspace depth and the breakdown point. This finding offers a new insight into the notion of the halfspace depth and extends the important role of the tail behavior as a quantitative assessment of robustness in the regression (He, Jurečková, Koenker and Portnoy (1990)) and the univariate location settings (Jurečková (1981)) to the multivariate location setting. Received: 1 July 1999
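As a reminder of the objects involved (standard sample definitions, stated here for orientation), the halfspace depth of a point x with respect to a sample X_1, …, X_n, and one common version of the α-depth-trimmed mean, are

$$
HD(x) = \min_{\|u\|=1} \frac{1}{n}\,\#\{\, i : u^{\top} X_i \ge u^{\top} x \,\},
\qquad
\bar X_{\alpha} = \operatorname{mean}\{\, X_i : HD(X_i) \ge \alpha \,\}.
$$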

13.
The existence of stationary processes of temporary equilibria is examined in an OLG model in which there are finitely many commodities and consumers in each period, and endowment profiles and expectation profiles are subject to stochastic shocks. A state space is taken to be the set of all payoff-relevant variables, and the dynamics of the economy are captured as a stochastic process in the state space. In our model, however, the state space does not necessarily admit a compact truncation consistent with the intertemporal restrictions, because distributions over expectation profiles may have non-compact supports. As shown in Duffie et al. [Duffie, D., Geanakoplos, J., Mas-Colell, A., McLennan, A., 1994. Stationary Markov equilibria. Econometrica 62, 745–781], such a compact truncation, called a self-justified set, is essential for the existence of stationary Markov equilibria. We extend their existence theorem so that it applies to our model.

14.
This paper studies the stability of a stochastic optimal growth economy introduced by Brock and Mirman [Brock, W.A., Mirman, L., 1972. Optimal economic growth and uncertainty: the discounted case. Journal of Economic Theory 4, 479–513] by utilizing stochastic monotonicity in a dynamic system. The construction of two boundary distributions leads to a new method of studying systems with non-compact state space. The paper shows the existence of a unique invariant distribution. It also shows the equivalence between the stability and the uniqueness of the invariant distribution in this dynamic system.
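For readers unfamiliar with the Brock-Mirman economy, its best-known textbook special case (log utility, Cobb-Douglas technology with a multiplicative shock and full depreciation — stated here only for orientation, not as the paper's setting) is

$$
\max \; \mathbb{E}\sum_{t=0}^{\infty}\beta^{t} \ln c_t
\quad\text{s.t.}\quad k_{t+1} = \varepsilon_t k_t^{\alpha} - c_t,
$$

whose optimal policy is k_{t+1} = αβ ε_t k_t^α and c_t = (1 − αβ) ε_t k_t^α; the induced capital process is the kind of stochastically monotone dynamic system whose invariant distribution the paper studies.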

15.
M. C. Jones, Metrika, 2002, 54(3): 215–231
Relationships between the F, skew t and beta distributions in the univariate case are extended in a natural way to the multivariate case. The result is two new distributions: a multivariate t/skew t distribution (on ℝ^m) and a multivariate beta distribution (on (0,1)^m). A special case of the former distribution is a new multivariate symmetric t distribution. The new distributions have a natural relationship to the standard multivariate F distribution (on (ℝ_+)^m) and many of their properties run in parallel. We look at: joint distributions, mathematically and graphically; marginal and conditional distributions; moments; correlations; local dependence; and some limiting cases. Received: March 2001
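The univariate relationships being generalized include the following standard facts:

$$
X \sim \mathrm{Beta}(a/2,\, b/2) \;\Longleftrightarrow\; \frac{bX}{a(1 - X)} \sim F(a, b),
\qquad
T \sim t_b \;\Longrightarrow\; T^2 \sim F(1, b).
$$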

16.
It is shown that fractional factorial plans represented by orthogonal arrays of strength three are universally optimal under a model that includes the mean, all main effects and all two-factor interactions between a specified factor and each of the other factors. Thus, such plans exhibit a kind of model robustness in being universally optimal under two different models. Procedures for obtaining universally optimal block designs for fractional factorial plans represented by orthogonal arrays are also discussed. Acknowledgements. The authors wish to thank the referees for making several useful comments on a previous version.
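As a small illustration of what "strength three" means (not an example taken from the paper), the sketch below builds the regular 2^(4−1) fraction with defining relation I = ABCD — a well-known orthogonal array of strength three in 8 runs — and checks its strength by brute force; the array choice and the `strength` helper are assumptions for illustration only.

```python
import itertools
import numpy as np

# A small orthogonal array of strength 3: the regular 2^(4-1) fraction
# with defining relation I = ABCD (8 runs, 4 two-level factors).
runs = np.array([(a, b, c, a * b * c)
                 for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])

def strength(oa):
    """Largest t such that every set of t columns contains all level
    combinations equally often."""
    n, k = oa.shape
    t = 0
    for size in range(1, k + 1):
        ok = True
        for cols in itertools.combinations(range(k), size):
            _, counts = np.unique(oa[:, list(cols)], axis=0, return_counts=True)
            # all 2**size combinations must appear, each the same number of times
            if len(counts) != 2 ** size or len(set(counts)) != 1:
                ok = False
                break
        if ok:
            t = size
        else:
            break
    return t

print(runs)
print("strength:", strength(runs))   # expected: 3
```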

17.
Max Happacher, Metrika, 2001, 53(2): 171–181
The problem of rounding finitely many (continuous) probabilities to (discrete) proportions N_i/n is considered, for some fixed rounding accuracy n. It is well known that the rounded proportions need not sum to unity, and instead may leave a nonzero discrepancy D = (∑ N_i) − n. We determine the distribution of D, assuming that the rounding function used is stationary and that the original probabilities follow a uniform distribution. Received: April 1999
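A tiny simulation of the quantity described above: probabilities are drawn uniformly on the simplex and each is rounded to a multiple of 1/n, and the distribution of the resulting discrepancy D is tabulated. The rounding convention (nearest integer, half to even), the dimensions m and n, and the number of replications are assumptions for illustration; the paper works with general stationary rounding functions.

```python
import numpy as np

rng = np.random.default_rng(1)

def discrepancy(p, n):
    """D = (sum of rounded counts) - n when each p_i is rounded to N_i/n."""
    N = np.rint(n * p).astype(int)       # nearest-integer (half-to-even) rounding
    return N.sum() - n

m, n, reps = 5, 100, 20000               # m probabilities, rounding accuracy n
d = np.empty(reps, dtype=int)
for r in range(reps):
    p = rng.dirichlet(np.ones(m))        # probabilities uniform on the simplex
    d[r] = discrepancy(p, n)

vals, counts = np.unique(d, return_counts=True)
print(dict(zip(vals.tolist(), (counts / reps).round(3).tolist())))
```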

18.
Edit Gombay, Metrika, 2000, 52(2): 133–145
Large sample approximations for sequential U-statistics using anti-symmetric kernels are considered under both the no-change and the change hypotheses. These new approximations make it easy to perform sequential tests. Received: November 1999
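For reference, a U-statistic with a kernel h of degree two and the anti-symmetry condition read

$$
U_n = \binom{n}{2}^{-1} \sum_{1 \le i < j \le n} h(X_i, X_j),
\qquad h(x, y) = -h(y, x),
$$

a familiar anti-symmetric example being h(x, y) = x − y; sequential versions track U_k as the observations arrive.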

19.
The problem of sequentially estimating an unknown distribution parameter of a particular exponential family of distributions is considered under a LINEX loss function for estimation error and a cost c > 0 for each of an i.i.d. sequence of potential observations X_1, X_2, . . . A Bayesian approach is adopted and conjugate prior distributions are assumed. Asymptotically pointwise optimal and asymptotically optimal procedures are derived.
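The LINEX (linear-exponential) loss referred to above is, in its usual form (with shape a ≠ 0 and scale b > 0),

$$
L(\Delta) = b\left(e^{a\Delta} - a\Delta - 1\right), \qquad \Delta = \hat\theta - \theta,
$$

which penalizes over- and under-estimation asymmetrically; the sequential problem adds the sampling cost c per observation to this loss.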

20.
The main goal of both Bayesian model selection and classical hypothesis testing is to make inferences with respect to the state of affairs in a population of interest. The main differences between the two approaches are the explicit use of prior information by Bayesians and the explicit use of null distributions by classical statisticians. Formalization of prior information in prior distributions is often difficult. In this paper two practical approaches to specifying prior distributions (encompassing priors and training data) will be presented. The computation of null distributions is relatively easy. However, as will be illustrated, a straightforward interpretation of the resulting p-values is not always easy. Bayesian model selection can be used to compute posterior probabilities for each of a number of competing models. This provides an alternative to the currently prevalent testing of hypotheses using p-values. Both approaches will be compared and illustrated using case studies. Each case study fits in the framework of the normal linear model, that is, analysis of variance and multiple regression.
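A toy contrast of the two quantities discussed above, for a one-sample normal-mean problem: the classical p-value versus a rough BIC-approximate Bayes factor. This is only an illustrative sketch under stated assumptions (the data-generating setup, the BIC approximation, and equal prior model probabilities); the paper's encompassing-prior and training-data approaches are different and more careful.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(loc=0.3, scale=1.0, size=50)    # data with a small true effect
n = x.size

# Classical test of H0: mu = 0
res = stats.ttest_1samp(x, 0.0)

def gaussian_loglik(x, mu):
    s2 = np.mean((x - mu) ** 2)                # MLE of the variance given mu
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1)

# BIC for H0 (mu fixed at 0, one free parameter) and H1 (mu free, two parameters)
bic0 = 1 * np.log(n) - 2 * gaussian_loglik(x, 0.0)
bic1 = 2 * np.log(n) - 2 * gaussian_loglik(x, x.mean())
bf10 = np.exp(0.5 * (bic0 - bic1))             # approximate Bayes factor for H1 vs H0
post_h1 = bf10 / (1 + bf10)                    # posterior prob. of H1 under equal priors

print(f"p-value = {res.pvalue:.3f}, BF10 = {bf10:.2f}, P(H1 | data) = {post_h1:.2f}")
```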

