Similar literature
20 similar documents found.
1.
The role of uniformity measured by the centered L2-discrepancy (Hickernell 1998a) has been studied in fractional factorial designs. The issue of a lower bound for the centered L2-discrepancy is crucial in the construction of uniform designs. Fang and Mukerjee (2000) and Fang et al. (2002, 2003b) derived lower bounds for fractions of two- and three-level factorials. In this paper we report some new lower bounds on the centered L2-discrepancy for a set of asymmetric fractional factorials. These lower bounds help to measure the uniformity of a given design. In addition, as an application of these lower bounds, we propose a method to construct uniform or nearly uniform designs with asymmetric factorials.
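For orientation, the centered L2-discrepancy referred to above is usually written in the following closed form for an n-run design \(P=\{x_1,\dots,x_n\}\) in s factors scaled to \([0,1]^s\) (a standard expression quoted as background, not taken from this abstract):

\[
\bigl(\mathrm{CD}_2(P)\bigr)^2=\Bigl(\tfrac{13}{12}\Bigr)^{s}
-\frac{2}{n}\sum_{i=1}^{n}\prod_{k=1}^{s}\Bigl(1+\tfrac12\bigl|x_{ik}-\tfrac12\bigr|-\tfrac12\bigl|x_{ik}-\tfrac12\bigr|^{2}\Bigr)
+\frac{1}{n^{2}}\sum_{i=1}^{n}\sum_{j=1}^{n}\prod_{k=1}^{s}\Bigl(1+\tfrac12\bigl|x_{ik}-\tfrac12\bigr|+\tfrac12\bigl|x_{jk}-\tfrac12\bigr|-\tfrac12\bigl|x_{ik}-x_{jk}\bigr|\Bigr).
\]

Lower bounds of the kind derived in the paper bound this quantity from below over all designs with a given run size and level structure, so a design attaining (or nearly attaining) the bound is (nearly) uniform.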

2.
Yan Liu, Min-Qian Liu, Metrika (2012) 75(1):33–53
Supersaturated designs (SSDs) have been highly valued in recent years for their ability to screen out important factors in the early stages of experiments. Recently, Liu and Lin (Statist Sinica 19:197–211, 2009) proposed a method to construct optimal mixed-level SSDs from smaller multi-level SSDs and transposed orthogonal arrays (OAs). This paper extends their method to construct more equidistant optimal SSDs by replacing the multi-level SSDs and transposed OAs with mixed-level SSDs and general transposed difference matrices, respectively, and then proposes two practical methods for constructing weak equidistant SSDs based on this extended method. A large number of new optimal SSDs can be constructed from these three methods. Some examples are provided, and more new designs are listed in the Appendix for practical use.

3.
In this paper we present a new method for constructing multi-level supersaturated designs with n rows, m columns and the equal occurrence property. We investigate the existence of multi-level supersaturated designs that use a single generator vector and its k-cyclic permutations as rows, and we derive the conditions this vector must satisfy to generate a balanced supersaturated design. These conditions simplify for three-level supersaturated designs. Using the proposed method, three-level supersaturated designs are constructed and explored. Moreover, many new optimal and near-optimal multi-level supersaturated designs are provided.
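As a loose illustration of the cyclic-generator idea described above, the sketch below stacks the cyclic shifts of a generator vector as rows and checks the equal occurrence property. It uses plain unit shifts rather than the paper's k-cyclic permutations, and the generator vector is purely illustrative, so this is a sketch of the general mechanism, not the paper's construction.

from collections import Counter

import numpy as np

def cyclic_design(generator):
    """Stack all cyclic shifts of `generator` as the rows of a design."""
    g = np.asarray(generator)
    return np.array([np.roll(g, shift) for shift in range(len(g))])

def has_equal_occurrence(design):
    """Check that every level appears equally often in every column."""
    for col in design.T:
        counts = Counter(col).values()
        if len(set(counts)) != 1:
            return False
    return True

# Illustrative three-level generator (not taken from the paper).
gen = [0, 1, 2, 0, 2, 1, 1, 0, 2]
D = cyclic_design(gen)
print(D.shape, has_equal_occurrence(D))

In this circulant case each column is a permutation of the generator, so the check reduces to the generator itself containing each level equally often.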

4.
A general method for constructing E(s²)-optimal two-level supersaturated designs (SSDs) with the equal occurrence property from supplementary difference sets is introduced. It is proved that SSDs constructed in this way are E(s²)-optimal. Comparisons are made with previous work, and it is shown that the proposed method gives promising results for the construction of large E(s²)-optimal SSDs.
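For reference, the E(s²) criterion used above is the average squared inner product over all pairs of distinct columns of the ±1 design matrix (smaller is better, since in a supersaturated design the columns cannot all be orthogonal). A minimal sketch of the computation, with a purely illustrative random matrix rather than a design from the paper:

from itertools import combinations

import numpy as np

def e_s2(X):
    """E(s^2) of a two-level design: the average of s_ij^2 over column pairs,
    where s_ij is the inner product of columns i and j (entries coded +/-1)."""
    m = X.shape[1]
    s2 = [float(X[:, i] @ X[:, j]) ** 2 for i, j in combinations(range(m), 2)]
    return sum(s2) / len(s2)

# Illustrative 8-run, 10-column random +/-1 matrix (not an optimal SSD).
rng = np.random.default_rng(0)
X = rng.choice([-1, 1], size=(8, 10))
print(e_s2(X))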

5.
Dey and Midha (Biometrika 83(2):484–489, 1996) constructed optimal block designs for complete diallel cross experiments using triangular partially balanced incomplete block designs with two associate classes. They listed optimal block designs for numbers of lines v in the range 5 ≤ v ≤ 10. In this paper, we propose additional optimal block designs for complete diallel cross experiments using two-associate-class partially balanced block designs for some additional values of v. Our method yields designs in both proper and non-proper settings for complete diallel cross experiments. The proper and non-proper designs are optimal in the sense of Kempthorne (Genetics 41:451–459, 1956), and the non-proper designs are universally optimal in the sense of Kiefer (A survey of statistical design and linear models, North-Holland, Amsterdam, 1975). A list of practically important designs is also given.

6.
Gumbel's identity equates the k-th Bonferroni sum with the k-th binomial moment of the number M_n of events that occur, out of n arbitrary events. We provide a unified treatment of familiar probability bounds on a union of events due to Bonferroni, Galambos–Rényi, Dawson–Sankoff, and Chung–Erdős, as well as less familiar bounds due to Fréchet and Gumbel, all of which are expressed in terms of Bonferroni sums, by showing that they all arise as bounds in a more general setting in terms of binomial moments of a general non-negative integer-valued random variable. Use of Gumbel's identity then gives the inequalities in the familiar Bonferroni-sum form. This approach simplifies existing proofs. It also allows the results of Fréchet and Gumbel to be generalized to bounds on the probability that at least t of the n events occur, for any 1 ≤ t ≤ n. A further consequence of the approach is an improvement of a recent bound of Petrov, which itself generalizes the Chung–Erdős bound.
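As a concrete reference for the quantities named above (standard statements quoted as background, not reproduced from the paper): writing \(S_k=\sum_{i_1<\cdots<i_k}P(A_{i_1}\cap\cdots\cap A_{i_k})\) for the k-th Bonferroni sum of events \(A_1,\dots,A_n\) and \(M_n\) for the number of these events that occur,

\[
S_k=\mathbb{E}\binom{M_n}{k}\quad\text{(Gumbel's identity)},
\]
\[
S_1-S_2\;\le\;P(M_n\ge 1)\;\le\;S_1\quad\text{(Bonferroni)},\qquad
P(M_n\ge 1)\;\ge\;\frac{S_1^{2}}{S_1+2S_2}\quad\text{(Chung–Erdős)}.
\]

The paper's general setting replaces \(M_n\) by an arbitrary non-negative integer-valued random variable and recovers inequalities of this type from bounds on its binomial moments.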

7.
This paper studies the problem of estimating the total weight of objects using a chemical balance weighing design under the restriction |L − R| ≤ a, where L and R represent the numbers of objects placed on the left and right pans, respectively. A lower bound for the variance of the estimated total weight is given, and a necessary and sufficient condition for this lower bound to be attained is obtained. Finally, weighing designs for which this lower bound is attainable are constructed.
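For background, the unrestricted chemical balance weighing setup behind this abstract can be sketched as follows (a generic formulation under one common sign convention; the paper's restriction |L − R| ≤ a constrains the rows of X and is not reflected here):

\[
\mathbf{y}=X\mathbf{w}+\mathbf{e},\qquad x_{ij}\in\{-1,0,+1\},\qquad
\hat{\mathbf{w}}=(X^{\top}X)^{-1}X^{\top}\mathbf{y},
\]
\[
\widehat{T}=\mathbf{1}^{\top}\hat{\mathbf{w}},\qquad
\operatorname{Var}\bigl(\widehat{T}\bigr)=\sigma^{2}\,\mathbf{1}^{\top}(X^{\top}X)^{-1}\mathbf{1},
\]

where \(x_{ij}=+1\) (\(-1\)) if object j is placed on the left (right) pan in the i-th weighing and 0 if it is left off, and \(\widehat{T}\) estimates the total weight. The lower bound studied in the paper is a bound on \(\operatorname{Var}(\widehat{T})\) over designs satisfying the pan-balance restriction.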

8.
p-Values are commonly transformed to lower bounds on Bayes factors, so-called minimum Bayes factors. For the linear model, a sample-size adjusted minimum Bayes factor over the class of g-priors on the regression coefficients has recently been proposed (Held & Ott, The American Statistician 70(4), 335–341, 2016). Here, we extend this methodology to logistic regression to obtain a sample-size adjusted minimum Bayes factor for 2 × 2 contingency tables. We then study the relationship between this minimum Bayes factor and two-sided p-values from Fisher's exact test, as well as less conservative alternatives, with a novel parametric regression approach. It turns out that for all p-values considered, the maximal evidence against the point null hypothesis is inversely related to the sample size. The same qualitative relationship is observed for minimum Bayes factors over the more general class of symmetric prior distributions. For the p-values from Fisher's exact test, the minimum Bayes factors do not, on average, tend to the large-sample bound as the sample size becomes large, but for the less conservative alternatives the large-sample behaviour is as expected.
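To make the p-value-to-minimum-Bayes-factor idea concrete, the sketch below computes a two-sided Fisher exact p-value and converts it with the generic \(-e\,p\log p\) calibration. That calibration is only a widely used reference bound, not the sample-size adjusted minimum Bayes factor developed in the paper, and the 2 × 2 table is illustrative.

import numpy as np
from scipy.stats import fisher_exact

def min_bayes_factor_ep(p):
    """Generic -e*p*log(p) lower bound on the Bayes factor for the point null
    (valid for p < 1/e); used here only as a stand-in reference calibration,
    not the sample-size adjusted bound proposed in the paper."""
    return -np.e * p * np.log(p) if p < 1 / np.e else 1.0

table = [[12, 5], [6, 14]]  # illustrative 2x2 contingency table
_, p_value = fisher_exact(table, alternative="two-sided")
print(p_value, min_bayes_factor_ep(p_value))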

9.
In the first phase of pharmaceutical development, and assuming that the probability of a positive response increases with dose, the main statistical goal is to estimate a percentile of the dose–response function for a given target value Γ. We compare the maximum likelihood and centred isotonic regression estimators of the target dose, and we discuss several performance criteria to assess inferential precision, the amount of toxicity exposure, and the trade-off between them for a set of exemplary adaptive designs. We compare these designs using graphical tools. Several scenarios are considered using simulation, including the use of several start-up rules, a change of slope of the dose–toxicity function at the target dose, and different theoretical models, such as the logistic, normal or skew-normal distribution functions.
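A minimal sketch of the target-dose estimation step described above, using ordinary isotonic regression from scikit-learn as a simplified stand-in for the centred isotonic regression estimator; the doses, response rates, and target Γ are illustrative, not data from the paper.

import numpy as np
from sklearn.isotonic import IsotonicRegression

# Illustrative dose levels, observed response rates and target value.
doses = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
resp_rate = np.array([0.05, 0.10, 0.08, 0.30, 0.45, 0.60])
gamma = 0.30  # target response probability

# Monotone (non-decreasing) fit of the dose-response curve.
iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
fitted = iso.fit_transform(doses, resp_rate)

# Target-dose estimate: smallest dose whose fitted response reaches gamma.
eligible = doses[fitted >= gamma]
target_dose = eligible[0] if eligible.size else np.nan
print(fitted, target_dose)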

10.
For paired choice experiments, two new methods for constructing designs are proposed for estimating the main effects. In many cases, these designs require about 30–50% fewer choice pairs than the existing designs while retaining reasonably high D-efficiencies for the estimation of the main effects. Furthermore, compared with the existing efficient designs, our designs have higher D-efficiencies for the same number of choice pairs.

11.
In this paper, a new randomized response model is proposed whose Cramér–Rao lower bound of variance is shown to be lower than that of the model suggested by Singh and Sedory, at equal or greater protection of respondents. A new measure of protection of respondents in the setup of the efficient use of two decks of cards, due to Odumade and Singh, is also suggested. The resulting Cramér–Rao lower bounds of variance are compared under different situations through exact numerical illustrations. Survey data to estimate the proportion of students who have sometimes driven a vehicle after drinking alcohol and feeling over the legal limit are collected using the proposed randomization device and then analyzed. The proposed randomized response technique is also compared with a black box technique within the same survey. A method to determine the minimum sample size in randomized response sampling based on a small pilot survey is also given.
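For readers unfamiliar with randomized response, the classical Warner model gives a compact picture of the estimator-variance-versus-protection trade-off discussed above (background only; the paper's two-deck model and its Cramér–Rao bounds are different). Each respondent is directed, with probability p, to report on the sensitive statement and, with probability 1 − p, on its complement, answering truthfully in either case, so that

\[
\lambda=p\pi+(1-p)(1-\pi),\qquad
\hat{\pi}=\frac{\hat{\lambda}-(1-p)}{2p-1}\quad(p\neq\tfrac12),\qquad
\operatorname{Var}(\hat{\pi})=\frac{\pi(1-\pi)}{n}+\frac{p(1-p)}{n(2p-1)^{2}},
\]

where \(\pi\) is the sensitive proportion and \(\hat{\lambda}\) the observed proportion of "yes" answers. Moving p towards 1/2 increases respondent protection but inflates the variance, which is the tension an efficient randomized response design aims to manage.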

12.
The nineteenth century was a century of reform for the Ottoman state. The Tanzimat reforms hold a unique place in the Ottoman history of modernization. During the Tanzimat period (1839–1878), the state underwent a restructuring process in almost all of its institutions to establish a centralized modern state, and many new institutions were founded. The Ottomans paid special attention to education to train the new generation required for the continuity of modernization and the centralized bureaucratic structure. While they opened modern high schools and higher education institutions, they attempted to reform the existing sıbyan schools, which were the primary education institutions. This process of restructuring education was also carried out in Cyprus, which had been an Ottoman island since 1571. These attempts remained restricted to efforts to increase the number of sıbyan schools in Cyprus. There was a failure to replace the religious education given at the schools with a secular program, curriculum or modern education system based on education management. This situation also adversely affected the quality of education at the Rüştiye School in Nicosia, the first and only modern secondary school of the period, which was opened in 1864.

13.
The aim of this paper is to establish recurrence relations satisfied by the product moments and covariances of kth records arising from discrete distributions. These relations are then evaluated for an underlying geometric distribution, and the results are used to obtain formulas for the correlation coefficients of geometric kth records. We consider all three known types of kth records: strong, ordinary, and weak.

14.
Statistica Neerlandica (2018) 72(2):126–156
In this paper, we study the application of Le Cam's one-step method to parameter estimation in ordinary differential equation models. This computationally simple technique can serve as an alternative to numerical evaluation of the popular non-linear least squares estimator, which typically requires the use of a multistep iterative algorithm and repetitive numerical integration of the ordinary differential equation system. The one-step method starts from a preliminary \(\sqrt{n}\)-consistent estimator of the parameter of interest and turns it into an asymptotic (as the sample size \(n \rightarrow \infty\)) equivalent of the least squares estimator through a numerically straightforward procedure. We demonstrate the performance of the one-step estimator via extensive simulations and real data examples. The method enables the researcher to obtain both point and interval estimates. The preliminary \(\sqrt{n}\)-consistent estimator that we use depends on non-parametric smoothing, and we provide a data-driven methodology for choosing its tuning parameter and support it by theory. An easy implementation scheme of the one-step method for practical use is pointed out.
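For orientation, the generic likelihood form of a one-step (Newton/Fisher-scoring) update is shown below as background; the paper itself works with a least squares criterion for ODE models, so the exact correction term there differs. Starting from a \(\sqrt{n}\)-consistent preliminary estimator \(\tilde{\theta}_n\),

\[
\hat{\theta}_n=\tilde{\theta}_n+\hat{I}_n(\tilde{\theta}_n)^{-1}\,\frac{1}{n}\sum_{i=1}^{n}\dot{\ell}(\tilde{\theta}_n;X_i),
\]

where \(\dot{\ell}\) is the score and \(\hat{I}_n\) a consistent estimate of the Fisher information; under regularity conditions \(\hat{\theta}_n\) is asymptotically equivalent to the fully iterated estimator even though no iterative optimization is performed.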

15.
Sudeep R. Bapat, Metrika (2018) 81(8):1005–1024
The first part of this paper deals with developing a purely sequential methodology for the point estimation of the mean \(\mu \) of an inverse Gaussian distribution having an unknown scale parameter \(\lambda \). We assume a weighted squared error loss function and aim at controlling the associated risk function per unit cost by bounding it from above by a known constant \(\omega \). We also establish first-order and second-order asymptotic properties of our stopping rule. The second part of this paper deals with obtaining a purely sequential fixed-accuracy confidence interval for the unknown mean \(\mu \), assuming that the scale parameter \(\lambda \) is known. First-order asymptotic efficiency and asymptotic consistency properties are also established for our proposed procedures. We then provide extensive sets of simulation studies and a real data analysis using data from fatigue life analysis to show the encouraging performance of our proposed stopping strategies.

16.
The focus of this article is modeling the magnitude and duration of monotone periods of log-returns. For this, we propose a new bivariate law, assuming that the probabilistic framework for the magnitude and duration is based on the joint distribution of (X,N), where N is geometrically distributed and X is the sum of an identically distributed sequence of inverse-Gaussian random variables independent of N. In this sense, X and N represent the magnitude and duration of the log-returns, respectively, and the magnitude comes from an infinite mixture of inverse-Gaussian distributions. This new model is named the bivariate inverse-Gaussian geometric law. We provide statistical properties of the model and explore stochastic representations. In particular, we show that this law is infinitely divisible, and with this, an induced Lévy process is proposed and studied in some detail. Estimation of the parameters is performed via maximum likelihood, and Fisher's information matrix is obtained. An empirical illustration with the log-returns of Tyco International stock demonstrates the superior performance of the proposed law compared to an existing model. We expect that the proposed law can be considered a powerful tool in the modeling of log-returns and other episode analyses such as water resources management, risk assessment, and civil engineering projects.
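A small simulation sketch of the (X, N) construction described above: N is drawn from a geometric distribution and X is the sum of N i.i.d. inverse-Gaussian variables. The parameter values, the geometric support convention (starting at 1), and the (mean, scale) parameterization of numpy's Wald sampler are assumptions for illustration, not the paper's fitted model.

import numpy as np

rng = np.random.default_rng(42)

def sample_magnitude_duration(n_draws, q=0.3, mu=1.0, lam=2.0):
    """Draw (X, N) pairs: N ~ Geometric(q) on {1, 2, ...} is the duration,
    X is the sum of N i.i.d. inverse-Gaussian(mu, lam) magnitudes.
    All parameter values are illustrative."""
    N = rng.geometric(q, size=n_draws)
    X = np.array([rng.wald(mu, lam, size=n).sum() for n in N])
    return X, N

X, N = sample_magnitude_duration(5)
print(list(zip(np.round(X, 3), N)))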

17.
This paper gives a complete proof of the bounds on efficiency, conjectured by Das and Kageyama (1992), concerning the robustness of extended balanced incomplete block designs against the unavailability of any number of observations in a block.

18.
19.
U-type designs and orthogonal Latin hypercube designs (OLHDs) have been used extensively for performing computer experiments. Both have good space-filling properties in one dimension. U-type designs may not have low correlations among the main effects, quadratic effects and two-factor interactions. On the other hand, OLHDs are hard to find due to their large number of levels for each factor. Recently, alternative classes of U-type designs with zero or low correlations among the effects of interest have appeared in the literature. In this paper, we present new classes of U-type or quantitative \(3\)-orthogonal designs for computer experiments. The proposed designs are constructed by combining known combinatorial structures, and their main effects are pairwise orthogonal, orthogonal to the mean effect, and orthogonal to both quadratic effects and two-factor interactions.
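The sketch below checks exactly the orthogonality properties listed in the last sentence for a candidate design matrix with centered factor levels; reading \(3\)-orthogonality as these column conditions, and using the \(3^2\) full factorial as a toy test case, are my interpretation of the abstract, not code from the paper.

from itertools import combinations

import numpy as np

def is_3_orthogonal(D, tol=1e-9):
    """Check that the centered main-effect columns of D (rows = runs) sum to
    zero (orthogonal to the mean), are pairwise orthogonal, and are orthogonal
    to every quadratic column and every two-factor interaction column."""
    n, k = D.shape
    if np.any(np.abs(D.sum(axis=0)) > tol):
        return False
    if any(abs(D[:, i] @ D[:, j]) > tol for i, j in combinations(range(k), 2)):
        return False
    second_order = [D[:, i] * D[:, j] for i in range(k) for j in range(i, k)]
    return all(abs(D[:, a] @ s) <= tol for a in range(k) for s in second_order)

# Tiny check with the 3^2 full factorial coded as -1, 0, 1.
D = np.array([[a, b] for a in (-1, 0, 1) for b in (-1, 0, 1)], dtype=float)
print(is_3_orthogonal(D))  # True for this toy example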

20.
A major concern about the use of simulation models regards their relationship with the empirical data. The identification of a suitable indicator quantifying the distance between the model and the data would help and guide model selection and output validation. This paper proposes the use of a new criterion, called GSL-div and developed in Lamperti (Econ Stat, 2017.  https://doi.org/10.1016/j.ecosta.2017.01.006), to assess the degree of similarity between the dynamics observed in the data and those generated by the numerical simulation of models. As an illustrative application, this approach is used to distinguish between different versions of the well-known asset pricing model with heterogeneous beliefs proposed in Brock and Hommes (J Econ Dyn Control 22(8–9):1235–1274, 1998.  https://doi.org/10.1016/S0165-1889(98)00011-6). Once the discrimination ability of the GSL-div is established, the model's dynamics are directly compared with actual data from two major stock market indexes (EuroSTOXX 50 for Europe and CSI 300 for China). Results show that the model, once calibrated, is fairly able to track the evolution of both indexes, even though a better fit is reported for the Chinese stock market. However, I also find that many different combinations of traders' behavioural rules are compatible with the same observed dynamics. Within this heterogeneity, an emerging common trait is found: to be empirically valid, the model has to account for a strong trend-following component, which might come either from a single trend type that heavily extrapolates information from past observations or from combinations of different types with milder, or even opposite, attitudes towards the trend.

