Similar Articles
20 similar articles found (search time: 15 ms)
1.
U-type designs and orthogonal Latin hypercube designs (OLHDs) have been used extensively in computer experiments. Both have good space-filling properties in one dimension. However, U-type designs may not have low correlations among the main effects, quadratic effects, and two-factor interactions, while OLHDs are hard to find because of the large number of levels required for each factor. Recently, alternative classes of U-type designs with zero or low correlations among the effects of interest have appeared in the literature. In this paper, we present new classes of U-type, or quantitative 3-orthogonal, designs for computer experiments. The proposed designs are constructed by combining known combinatorial structures; their main effects are pairwise orthogonal, orthogonal to the mean effect, and orthogonal to both quadratic effects and two-factor interactions.
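For intuition, the orthogonality claims can be checked numerically on a small stand-in design. The sketch below (assuming a 3^2 full factorial with centred levels in place of the paper's larger combinatorial constructions) verifies that each main-effect column has zero inner product with the mean column, the centred quadratic effects, and the two-factor interaction:

```python
import numpy as np
from itertools import product

# 3^2 full factorial with centred levels -1, 0, 1 (a stand-in example;
# the paper's designs are larger combinatorial constructions)
D = np.array(list(product([-1, 0, 1], repeat=2)), dtype=float)
x1, x2 = D.T

mean_col = np.ones(len(D))
mains = [x1, x2]
quads = [x1**2 - (x1**2).mean(), x2**2 - (x2**2).mean()]  # centred quadratics
inter = [x1 * x2]

def max_abs_dot(cols_a, cols_b):
    """Largest |inner product| between any column pair of the two groups."""
    return max(abs(np.dot(a, b)) for a in cols_a for b in cols_b)

# a main-effect column should be orthogonal to every other effect column
check = max(max_abs_dot(mains, [mean_col]),
            max_abs_dot(mains, quads),
            max_abs_dot(mains, inter),
            abs(np.dot(x1, x2)))
```

Every inner product is exactly zero for this design, which is what "3-orthogonal" requires of the main effects.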

2.
A critical comparative growth analysis of Central and Eastern European (CEE) countries is performed. The main conclusion is that there was economic convergence among most CEE accession candidates, but not between them and Western Europe. The results justify a separation into first- and second-wave accession countries, but also undermine the distinction within Central and Eastern Europe between accession and non-accession countries. The paper critically examines theories and empirical studies for three types of convergence, namely σ-, β-, and club convergence. Each can be measured in absolute terms or conditional on the long-term equilibrium (steady state) of each country. Empirical results are provided for all types of convergence from 1996 to 2000, with both population-weighted and unweighted data. The analysis is performed for differently framed country subgroups, including Western Europe for better comparability. Once absolute convergence is found through a unit root test on the time series of the cross-sectional standard deviation of income per capita (σ convergence), the regression coefficient of initial income per capita, with average growth over the sample period as the dependent variable (β convergence), establishes the speed of this process. The same method applies to the conditional version by using the distance of income from the corresponding steady state instead of the level of GDP. Markov chain probability matrices (club convergence) then provide information about the past behaviour of the whole cross-sectional income distribution over time, as well as about the intra-distribution mobility of individual countries.

3.
Elfving's method is a well-known graphical procedure for obtaining c-optimal designs. Although the method is valid in any dimension, it is rarely used for more than two parameters; its usefulness seems constrained by the difficulty of constructing the Elfving set. In this paper a computational procedure for finding c-optimal designs using Elfving's method in more than two dimensions is provided. It is suitable for any model, performs efficiently for three- or four-dimensional models, and can be implemented in most programming languages. Acknowledgments. The authors would like to thank Dr. Torsney for his helpful comments. This research was supported by a grant from JCyL SA004/01.

4.
In this article we analyse the number of "no opinion" answers to attitude items. We argue that this number can be considered a count variable that should be analysed using Poisson regression or negative binomial regression. Since we are interested in the effects of both respondent and interviewer characteristics on the number of "no opinion" answers, we use multilevel analysis, which takes into account the hierarchical structure of the data. Consequently, multilevel Poisson regression and multilevel negative binomial regression are applied. Our analysis shows that answering "no opinion" is related to some sociodemographic respondent characteristics. In addition, we find a significant interviewer effect, but we are not able to explain that effect in terms of interviewer variables.
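As a minimal illustration of the count-model side of this approach (single-level only; the multilevel interviewer structure is beyond a short sketch, and the data here is synthetic), the following fits a Poisson regression with a log link by iteratively reweighted least squares:

```python
import numpy as np

def poisson_irls(X, y, n_iter=50):
    """Fit a Poisson regression (log link) by iteratively reweighted
    least squares. X must include an intercept column; y holds counts."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)          # current fitted means
        z = X @ beta + (y - mu) / mu   # working response
        w = mu                         # working weights
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
    return beta

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                      # a single respondent covariate
X = np.column_stack([np.ones(n), x])
true_beta = np.array([0.5, 0.8])
y = rng.poisson(np.exp(X @ true_beta))      # synthetic "no opinion" counts
beta_hat = poisson_irls(X, y)
```

With enough observations the coefficient estimates recover the generating values; a multilevel version would add interviewer-level random intercepts on top of this likelihood.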

5.
A statistical treatment of the problem of division
The problem of division is one of the most important problems in the emergence of probability, and it has long been considered solved from a probabilistic viewpoint. However, we do not find the solution satisfactory. In this study, the problem is recast as a statistical one. The outcomes of the matches of the game are treated as an infinitely exchangeable random sequence, and predictors/estimators are constructed in light of the de Finetti representation theorem. Bounds on the estimators are derived over wide classes of priors (mixing distributions). We find that, although conservative, the classical solutions are justified by our analysis, while the plug-in estimates are too optimistic for the winning player. Acknowledgement. The authors would like to thank the referees for their insightful and informative suggestions and, particularly, for referring us to important references. Supported by NSC-88-2118-M-259-009 and in part by NSC 89-2118-M-259-012. Received August 2002.

6.
We present an alternative proof of Gibbard's random dictatorship theorem with ex post Pareto optimality. Gibbard (1977) showed that when the number of alternatives is finite and larger than two, and individual preferences are linear (strict), a strategy-proof decision scheme (a probabilistic analogue of a social choice function or voting rule) is a convex combination of decision schemes which are, in his terms, either unilateral or duple. As a corollary of this theorem (credited to H. Sonnenschein) he showed that a decision scheme which is strategy-proof and satisfies ex post Pareto optimality is randomly dictatorial. We call this corollary Gibbard's random dictatorship theorem. We present a proof of this theorem which is direct and follows Gibbard's original approach closely. By focusing on the case with ex post Pareto optimality, our proof is simpler and more intuitive than Gibbard's original one. Received: 15 October 2001, Accepted: 23 May 2003, JEL Classification: D71, D72. Yasuhito Tanaka: The author is grateful to an anonymous referee and the Associate Editor of this journal for very helpful comments and suggestions. This research has been supported by a grant from the Zengin Foundation for Studies on Economics and Finance in Japan.

7.
The interest in Data Envelopment Analysis (DEA) as a method for analyzing the productivity of homogeneous Decision Making Units (DMUs) has increased significantly in recent years. One of the main goals of DEA is to measure, for each DMU, its production efficiency relative to the other DMUs under analysis. Apart from a relative efficiency score, DEA also provides reference DMUs for inefficient DMUs. An inefficient DMU has, in general, more than one reference DMU, and an efficient DMU may be a reference unit for a large number of inefficient DMUs. These reference and efficiency relations describe a net which connects efficient and inefficient DMUs. We visualize this net by applying Sammon's mapping. Such a visualization provides a very compact representation of the respective reference and efficiency relations, and it helps to identify, for an inefficient DMU, efficient DMUs (or DMUs with a high efficiency score) which have a similar structure and can therefore be used as models. Furthermore, it can also be applied to visualize potential outliers in a very efficient way. JEL Classification: C14, C61, D24, M2
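Sammon's mapping itself is straightforward to sketch. The version below is a simplified first-order variant (Sammon's original algorithm uses a second-order update), and synthetic points stand in for the DMU reference/efficiency relation data; it minimises the usual Sammon stress by plain gradient descent:

```python
import numpy as np

def sammon(X, n_dim=2, n_iter=1000, lr=0.2, seed=0):
    """Minimise Sammon's stress E = (1/c) * sum_{i<j} (D_ij - d_ij)^2 / D_ij
    by gradient descent; D are input distances, d output distances."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    iu = np.triu_indices(n, 1)
    c = D[iu].sum()
    Dn = D.copy()
    np.fill_diagonal(Dn, 1.0)          # avoid division by zero on the diagonal
    Y = rng.normal(scale=1e-2, size=(n, n_dim))
    for _ in range(n_iter):
        d = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
        np.fill_diagonal(d, 1.0)
        coef = (Dn - d) / (d * Dn)     # zero on the diagonal by construction
        grad = (-2.0 / c) * (coef[:, :, None] * (Y[:, None] - Y[None, :])).sum(axis=1)
        Y -= lr * grad
    d = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
    stress = ((D[iu] - d[iu]) ** 2 / D[iu]).sum() / c
    return Y, stress

rng = np.random.default_rng(4)
pts = rng.normal(size=(10, 3))         # stand-in for the DMU relation data
Y, stress = sammon(pts)
```

The per-pair weighting by 1/D_ij is what distinguishes Sammon's stress from plain metric MDS: small input distances are preserved more faithfully, which is why nearby (structurally similar) units stay close in the plot.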

10.
Mixed-level designs are widely used in practical experiments. When the levels of some factors are difficult to change or control, fractional factorial split-plot (FFSP) designs are often used. This paper investigates necessary and sufficient conditions for a $2^{(n_1+n_2)-(k_1+k_2)}4_s^1$ FFSP design of resolution III or IV to have various clear factorial effects, including two types of main effects and three types of two-factor interaction components. The structure of such designs is shown and illustrated with examples.

11.
This paper reports a new tool for assessing the reliability of text interpretations heretofore unavailable to qualitative research. It responds to a combination of two challenges: the problem of assessing the reliability of multiple interpretations, for which a solution was anticipated earlier (Krippendorff, 1992) but not fully developed, and the problem of identifying units of analysis within a continuum of text and similar representations (Krippendorff, 1995). The paper sketches the family of α-coefficients, which this paper extends, and then describes its new arrival. A computational example is included in the Appendix.

12.
A one-sided testing problem based on an i.i.d. sample of observations is considered. The usual one-sided sequential probability ratio test would be based on a random walk derived from these observations. Here we propose a sequential test where the random walk is replaced by Lindley's random walk, which restarts at zero as soon as it becomes negative. We derive the asymptotics of the expected sample size and the error probabilities of this sequential test, and discuss its advantages in certain nonsymmetric situations. Acknowledgement. The authors thank the referee for helpful comments and suggestions. Their research was supported by the German Research Foundation (DFG) and the Russian Foundation for Basic Research (RFBR).
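The mechanics of such a test are simple to simulate. The sketch below (with illustrative Gaussian score increments and an arbitrary threshold, not the paper's asymptotic calibration) runs Lindley's recursion w_n = max(0, w_{n-1} + x_n) and stops when the walk crosses a boundary:

```python
import numpy as np

def lindley_test(xs, threshold):
    """One-sided sequential test driven by Lindley's random walk
    w_n = max(0, w_{n-1} + x_n), which restarts at zero whenever it
    would go negative; reject when the walk reaches `threshold`."""
    w = 0.0
    for n, x in enumerate(xs, start=1):
        w = max(0.0, w + x)
        if w >= threshold:
            return True, n      # rejected after n observations
    return False, len(xs)

rng = np.random.default_rng(1)
# increments play the role of log-likelihood-ratio scores: positive drift
# under the alternative, negative drift under the null (illustrative values)
rej_alt, n_alt = lindley_test(rng.normal(0.5, 1.0, size=2000), threshold=15.0)
rej_null, n_null = lindley_test(rng.normal(-0.5, 1.0, size=2000), threshold=15.0)
```

The restart at zero is the difference from the open-ended SPRT random walk: past negative evidence is discarded, which is what gives the test its distinctive behaviour in nonsymmetric situations.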

13.
A normality assumption is usually made for discriminating between two stationary time series processes. A nonparametric approach is desirable whenever there is doubt about the validity of this normality assumption. In this paper a nonparametric approach is suggested, based on kernel density estimation applied first to (p+1) sample autocorrelations and second to (p+1) consecutive observations. A numerical comparison is made between Fisher's linear discrimination based on sample autocorrelations and kernel density discrimination for AR and MA processes with and without Gaussian noise. The methods are applied to some seismological data.
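The autocorrelation-feature variant of this idea can be sketched directly. Below (a minimal sketch: AR(1) classes, a fixed bandwidth, and a Gaussian product kernel are all illustrative choices, not the paper's setup), each series is reduced to its first few sample autocorrelations and classified by the higher kernel density estimate:

```python
import numpy as np

def acf(x, lags):
    """First `lags` sample autocorrelations of a series."""
    x = x - x.mean()
    v = x @ x
    return np.array([x[:-k] @ x[k:] / v for k in range(1, lags + 1)])

def ar1(phi, n, rng):
    """Simulate an AR(1) series x_t = phi * x_{t-1} + e_t."""
    e = rng.normal(size=n)
    x = np.empty(n)
    x[0] = e[0]
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

def kde_loglik(point, sample, h=0.1):
    """Log of a Gaussian product-kernel density estimate at `point`,
    built from the feature vectors in `sample` with bandwidth h."""
    sq = ((sample - point) ** 2).sum(axis=1)
    return np.log(np.mean(np.exp(-sq / (2.0 * h * h))) + 1e-300)

rng = np.random.default_rng(2)
lags = 4   # the p+1 = 4 sample autocorrelations used as features
train_a = np.array([acf(ar1(0.8, 300, rng), lags) for _ in range(50)])
train_b = np.array([acf(ar1(-0.5, 300, rng), lags) for _ in range(50)])
test_feat = acf(ar1(0.8, 300, rng), lags)
label = "A" if kde_loglik(test_feat, train_a) > kde_loglik(test_feat, train_b) else "B"
```

No distributional form is assumed for the autocorrelation features, which is the point of preferring the kernel rule over Fisher's linear discriminant when normality is doubtful.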

14.
It is shown that fractional factorial plans represented by orthogonal arrays of strength three are universally optimal under a model that includes the mean, all main effects and all two-factor interactions between a specified factor and each of the other factors. Thus, such plans exhibit a kind of model robustness in being universally optimal under two different models. Procedures for obtaining universally optimal block designs for fractional factorial plans represented by orthogonal arrays are also discussed. Acknowledgements. The authors wish to thank the referees for making several useful comments on a previous version.  相似文献   

15.
Boxin Tang, Julie Zhou. Metrika (2013) 76(3): 325-337
Theoretical results are derived for finding D-optimal two-level orthogonal arrays for estimating main effects and some specified two-factor interactions. Upper bounds on the determinant of the relevant information matrix for D-optimality are obtained. For run sizes 12 and 20, D-optimal orthogonal arrays are found for all requirement sets with three or fewer two-factor interactions, and all of the D-optimal orthogonal arrays except one can be constructed by sequentially collecting columns. Examples are also given for run sizes 36 and 52 to discuss guidelines for constructing D-optimal orthogonal arrays.
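The D-criterion at issue can be illustrated on the smallest case. The sketch below uses the full 2^3 factorial (a strength-3 orthogonal array in 8 runs, rather than the paper's 12- or 20-run arrays) and computes det(X'X) for a model with the mean, three main effects, and one requested two-factor interaction; all five model columns are orthogonal with squared length 8, so the determinant attains the value 8^5:

```python
import numpy as np
from itertools import product

# full 2^3 factorial: an orthogonal array of strength 3 with 8 runs
runs = np.array(list(product([-1, 1], repeat=3)), dtype=float)
x1, x2, x3 = runs.T

# model matrix: mean, three main effects, and one requested interaction
X = np.column_stack([np.ones(8), x1, x2, x3, x1 * x2])

# D-criterion: determinant of the information matrix X'X
d_value = np.linalg.det(X.T @ X)
```

For fractional arrays the interaction columns are generally not orthogonal to everything else, so X'X is no longer diagonal and det(X'X) falls below this bound; the paper's results characterise when orthogonal arrays attain the bound.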

16.
Fractional factorial plans represented by orthogonal arrays of strength two are known to be optimal in a very strong sense under a model that includes the mean and all main effects, when all interactions are assumed absent. When a fractional factorial plan given by an orthogonal array of strength two is not saturated, one might also entertain some two-factor interactions in the model. In such a situation, it is of interest to examine which of the two-factor interactions can be estimated via a plan represented by an orthogonal array, and to study the overall efficiency of the plan when some interactions are in the model along with the mean and all main effects. In this paper, an attempt is made to examine these issues by considering some practically useful plans for asymmetric (mixed-level) factorials with a small number of runs.

18.
Two-level screening designs are appropriate for situations where a large number of factors (q) is examined but relatively few (k) of these are expected to be important. It is not known which of the q factors will be the important ones, that is, it is not known which k dimensions of the experimental space will be of further interest. After the results of the design have received a first analysis, the design is projected into the k dimensions of interest. These projections are investigated for Plackett-Burman-type screening designs with q ≤ 23 factors and k = 3, 4, and 5.

19.
Orthogonal main-effect plans for two and three factors in small blocks are obtained from the duals of adjusted orthogonal row-column designs. The method for constructing efficient plans is presented, and a relationship between the average efficiency factors of the row-column design and the corresponding main effects is given for the two-factor case.

20.
This paper analyzes the properties of aggregate excess demand functions for economies with an arbitrary finite set of N commodities where agents face trading restrictions of a general, abstract form: their budget set is defined by K-dimensional planes in ℝ^N. It is shown that, if there are at least K agents in the economy, the only general property satisfied by the value of aggregate excess demand and its derivative, at any arbitrary point, is Walras' Law. The result is established by considering an economy in which agents' preferences are of a 'generalized Leontief' type.
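Walras' Law itself, the one property that does survive aggregation, is easy to verify numerically. The sketch below uses a toy Cobb-Douglas exchange economy without trading restrictions (purely to illustrate the identity p·z(p) = 0, not the paper's 'generalized Leontief' construction):

```python
import numpy as np

# toy Cobb-Douglas exchange economy: agent h spends share alpha[h, i] of
# wealth p·omega_h on commodity i, so demand_hi = alpha_hi * wealth_h / p_i
rng = np.random.default_rng(3)
H, N = 4, 3                                  # agents, commodities
alpha = rng.dirichlet(np.ones(N), size=H)    # expenditure shares, rows sum to 1
omega = rng.uniform(1, 2, size=(H, N))       # endowments

def excess_demand(p):
    wealth = omega @ p                       # p·omega_h for each agent
    demand = alpha * wealth[:, None] / p     # Cobb-Douglas demands
    return (demand - omega).sum(axis=0)      # aggregate excess demand z(p)

p = rng.uniform(0.5, 2.0, size=N)            # an arbitrary price vector
z = excess_demand(p)
walras = p @ z                               # Walras' Law: p·z(p) = 0
```

Because each agent exhausts the budget (p·demand_h = wealth_h), the value of aggregate excess demand is identically zero at every price vector, whatever z itself looks like.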
