Similar Literature
17 similar documents found (search time: 0 ms).
1.
In this article we analyse the number of "no opinion" answers to attitude items. We argue that this number can be treated as a count variable and should therefore be analysed using Poisson regression or negative binomial regression. Since we are interested in the effect of both respondent and interviewer characteristics on the number of "no opinion" answers, we use multilevel analysis that takes the hierarchical structure of the data into account; consequently, multilevel Poisson regression and multilevel negative binomial regression are applied. Our analysis shows that answering "no opinion" is related to some sociodemographic respondent characteristics. In addition we find a significant interviewer effect, but we are not able to explain that effect in terms of interviewer variables.
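A minimal sketch of the single-level version of this modelling strategy on simulated data (the covariates `age` and `education`, all parameter values, and the overdispersion built into the simulation are hypothetical, not taken from the article). The article's multilevel variant would add an interviewer-level random intercept, e.g. via statsmodels' `PoissonBayesMixedGLM` or R's lme4.

```python
# Sketch: Poisson vs. negative binomial regression for a count of
# "no opinion" answers. All variable names and parameters are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.integers(18, 80, n),
    "education": rng.integers(0, 4, n),
})
# Simulated overdispersed counts (negative binomial with mean lam).
lam = np.exp(-1.0 + 0.02 * df["age"] - 0.3 * df["education"])
df["n_no_opinion"] = rng.negative_binomial(n=2, p=2 / (2 + lam))

poisson = smf.glm("n_no_opinion ~ age + education", data=df,
                  family=sm.families.Poisson()).fit()
negbin = smf.glm("n_no_opinion ~ age + education", data=df,
                 family=sm.families.NegativeBinomial()).fit()
print(poisson.summary())
print(negbin.summary())
```

The negative binomial model is typically preferred when the counts are overdispersed relative to the Poisson assumption, which is exactly what the simulation above builds in.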

2.
Elfving's method is a well-known graphical procedure for obtaining c-optimal designs. Although the method is valid for any dimension, it is rarely used for more than two parameters; its usefulness seems to be constrained by the difficulty of constructing the Elfving set. In this paper a computational procedure for finding c-optimal designs using Elfving's method in more than two dimensions is provided. It is suitable for any model, achieving efficient performance for three- or four-dimensional models. The procedure can be implemented in most programming languages. Acknowledgments. The authors would like to thank Dr. Torsney for his helpful comments. This research was supported by a grant from JCyL SA004/01.
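The paper's procedure constructs the Elfving set itself; as a rough illustration of the same design problem, the sketch below instead computes a c-optimal design by directly minimizing c'M(w)⁻¹c over weights on a candidate grid. The model, grid, and c-vector are illustrative assumptions, not the authors' algorithm.

```python
# Sketch: c-optimal design by direct minimization of c' M(w)^{-1} c.
import numpy as np
from scipy.optimize import minimize

xs = np.linspace(-1, 1, 21)                         # candidate design points
F = np.column_stack([np.ones_like(xs), xs, xs**2])  # quadratic model, 3 params
c = np.array([0.0, 1.0, 0.0])                       # target: the slope

def c_criterion(w):
    M = F.T @ (w[:, None] * F)                      # information matrix M(w)
    return c @ np.linalg.solve(M + 1e-10 * np.eye(3), c)

n = len(xs)
res = minimize(c_criterion, np.full(n, 1.0 / n),
               bounds=[(0.0, 1.0)] * n,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},
               method="SLSQP")
w = res.x
print("support:", xs[w > 1e-3], "weights:", w[w > 1e-3].round(3))
```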

3.
We explore the relationship between public information and implementable outcomes in an environment characterized by random endowments and private information. We show that if public signals carry no information about private types, then an exact relationship holds: a more informative public signal structure, in the sense of Blackwell, induces a smaller set of ex-ante implementable social choice functions. This holds for a large set of implementation standards, including Nash implementation and Bayesian incentive compatibility. The result extends the notion, dating to Hirshleifer (1971), that public information can have negative value to an endowment economy under uncertainty. Received: 23 September 2003; Accepted: 30 July 2004; JEL Classification: D80. Colin M. Campbell: I thank two referees and seminar participants at the 2002 meetings of the Society for Economic Design, at the 2003 Winter Meetings of the Econometric Society, and at Yale University for helpful input.

4.
5.
The interest in Data Envelopment Analysis (DEA) as a method for analyzing the productivity of homogeneous Decision Making Units (DMUs) has significantly increased in recent years. One of the main goals of DEA is to measure, for each DMU, its production efficiency relative to the other DMUs under analysis. Apart from a relative efficiency score, DEA also provides reference DMUs for inefficient DMUs. An inefficient DMU has, in general, more than one reference DMU, and an efficient DMU may be a reference unit for a large number of inefficient DMUs. These reference and efficiency relations describe a net which connects efficient and inefficient DMUs. We visualize this net by applying Sammon's mapping. Such a visualization provides a very compact representation of the respective reference and efficiency relations, and it helps to identify, for an inefficient DMU, efficient DMUs (or DMUs with a high efficiency score) that have a similar structure and can therefore be used as models. Furthermore, it can also be applied to visualize potential outliers in a very efficient way. JEL Classification: C14, C61, D24, M2
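A minimal sketch (hypothetical data, not the authors' code) of the input-oriented CCR DEA model solved as a linear program: for each DMU, the efficiency score theta is minimized subject to envelopment constraints, and the positive lambdas of an inefficient DMU identify its reference DMUs.

```python
# Sketch: input-oriented CCR DEA via linear programming.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 4.0, 5.0],   # inputs: one row per input, col per DMU
              [3.0, 1.0, 2.0, 4.0]])
Y = np.array([[1.0, 1.0, 2.0, 1.5]])  # outputs: one row per output

n = X.shape[1]
for o in range(n):
    c = np.r_[1.0, np.zeros(n)]                         # minimize theta
    A_in = np.hstack([-X[:, [o]], X])                   # sum_j lam_j x_j <= theta x_o
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])  # sum_j lam_j y_j >= y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[0]), -Y[:, o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    lam = res.x[1:]
    print(f"DMU {o}: efficiency={res.x[0]:.3f}, "
          f"references={np.where(lam > 1e-6)[0]}")
```

The resulting reference relations (which efficient DMUs carry positive lambdas for which inefficient ones) form the net that the paper then projects into the plane with Sammon's mapping.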

6.
A one-sided testing problem based on an i.i.d. sample of observations is considered. The usual one-sided sequential probability ratio test would be based on a random walk derived from these observations. Here we propose a sequential test where the random walk is replaced by Lindley's random walk, which starts anew at zero as soon as it becomes negative. We derive the asymptotics of the expected sample size and the error probabilities of this sequential test, and we discuss its advantages in certain nonsymmetric situations. Acknowledgement. The authors thank the referee for helpful comments and suggestions. Their research was supported by the German Research Foundation (DFG) and the Russian Foundation for Basic Research (RFBR).
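A sketch of the test statistic under stated assumptions (Gaussian observations, hypothetical means and threshold; the paper's general i.i.d. setting and its asymptotics are not reproduced here). Applied to log-likelihood ratios, Lindley's recursion W(n) = max(0, W(n-1) + Z(n)) is the familiar one-sided CUSUM statistic:

```python
# Sketch: sequential test based on Lindley's random walk.
import numpy as np

rng = np.random.default_rng(1)

def lindley_test(xs, mu0=0.0, mu1=0.5, sigma=1.0, h=6.0):
    """Return (decision, sample size) for H0: mu=mu0 vs H1: mu=mu1."""
    w = 0.0
    for n, x in enumerate(xs, start=1):
        z = (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma**2  # Gaussian LLR
        w = max(0.0, w + z)            # Lindley recursion: restart at zero
        if w >= h:
            return "reject H0", n
    return "no decision", len(xs)

print(lindley_test(rng.normal(0.5, 1.0, 5000)))  # data from H1: quick rejection
print(lindley_test(rng.normal(0.0, 1.0, 5000)))  # data from H0: typically none
```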

7.
This paper reports a new tool for assessing the reliability of text interpretations heretofore unavailable to qualitative research. It responds to a combination of two challenges: the problem of assessing the reliability of multiple interpretations, for which a solution was anticipated earlier (Krippendorff, 1992) but not fully developed, and the problem of identifying units of analysis within a continuum of text and similar representations (Krippendorff, 1995). The paper sketches the family of α-coefficients, which this paper extends, and then describes its new arrival. A computational example is included in the Appendix.
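For orientation, a minimal sketch of the classical nominal-data member of the α-family (Krippendorff's alpha), the coefficient the paper generalizes to unitized continuous data; the unitizing extension itself is substantially more involved.

```python
# Sketch: Krippendorff's alpha for nominal data.
import numpy as np

def krippendorff_alpha_nominal(ratings):
    """ratings: coders x units array; np.nan marks missing codes."""
    ratings = np.asarray(ratings, dtype=float)
    values = np.unique(ratings[~np.isnan(ratings)])
    idx = {v: k for k, v in enumerate(values)}
    o = np.zeros((len(values), len(values)))   # coincidence matrix
    for unit in ratings.T:
        codes = unit[~np.isnan(unit)]
        m = len(codes)
        if m < 2:
            continue                           # units with one code drop out
        for i in range(m):
            for j in range(m):
                if i != j:
                    o[idx[codes[i]], idx[codes[j]]] += 1.0 / (m - 1)
    n_c = o.sum(axis=1)
    n = n_c.sum()
    d_o = (n - np.trace(o)) / n                       # observed disagreement
    d_e = (n**2 - (n_c**2).sum()) / (n * (n - 1))     # expected disagreement
    return 1 - d_o / d_e

# Two coders, five units, disagreement on the last unit.
print(krippendorff_alpha_nominal([[1, 2, 3, 3, 2],
                                  [1, 2, 3, 3, 1]]))   # ~0.727
```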

8.
We present an alternative proof of Gibbard's random dictatorship theorem with ex post Pareto optimality. Gibbard (1977) showed that when the number of alternatives is finite and larger than two, and individual preferences are linear (strict), a strategy-proof decision scheme (a probabilistic analogue of a social choice function or a voting rule) is a convex combination of decision schemes which are, in his terms, either unilateral or duple. As a corollary of this theorem (credited to H. Sonnenschein), he showed that a decision scheme which is strategy-proof and satisfies ex post Pareto optimality is randomly dictatorial. We call this corollary Gibbard's random dictatorship theorem. We present a proof of this theorem which is direct and closely follows Gibbard's original approach. By focusing attention on the case with ex post Pareto optimality, our proof is simpler and more intuitive than Gibbard's original proof. Received: 15 October 2001; Accepted: 23 May 2003; JEL Classification: D71, D72. Yasuhito Tanaka: The author is grateful to an anonymous referee and the Associate Editor of this journal for very helpful comments and suggestions. This research was supported by a grant from the Zengin Foundation for Studies on Economics and Finance in Japan.

9.
Jainz, M. (2003). Metrika, 58(3), 273-277.
We show that the projections onto four factors of an arbitrary orthogonal array of strength 2 allow the estimation of main effects and two-factor interactions, when all other effects are assumed to be zero, provided those projections satisfy the bounds given by Weil's theorem. The only exceptions are the Hadamard matrices of orders 16 and 24. A consequence is again the estimability of main effects and two-factor interactions for the projections onto four factors of the first Paley construction, for arbitrary run size.

10.
A statistical treatment of the problem of division
The problem of division is one of the most important problems in the emergence of probability. It has long been considered solved from a probabilistic viewpoint. However, we do not find the solution satisfactory. In this study, the problem is recast as a statistical problem. The outcomes of the matches of the game are considered as an infinitely exchangeable random sequence, and predictors/estimators are constructed in light of de Finetti's representation theorem. Bounds on the estimators are derived over wide classes of priors (mixing distributions). We find that, although conservative, the classical solutions are justifiable by our analysis, while the plug-in estimates are too optimistic for the winning player. Acknowledgement. The authors would like to thank the referees for the insightful and informative suggestions and, particularly, for referring us to important references. Supported by NSC-88-2118-M-259-009 and in part by NSC 89-2118-M-259-012. Received August 2002.
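For context, a sketch of the classical (Pascal-Fermat) solution that the paper re-examines: the stakes are split according to each player's probability of winning the unfinished match. The paper's point is that the per-round win probability p, fixed below, should itself be treated as unknown and estimated.

```python
# Sketch: classical solution to the problem of division (problem of points).
from math import comb

def share_of_stakes(a, b, p=0.5):
    """Probability that player A wins the match, needing `a` more points
    while the opponent needs `b`, with per-round win probability `p`."""
    n = a + b - 1                      # at most n more rounds decide it
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(a, n + 1))

# A needs 2 points, B needs 3: classical fair share for A.
print(share_of_stakes(2, 3))           # 0.6875 = 11/16
```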

11.
A normality assumption is usually made for the discrimination between two stationary time series processes. A nonparametric approach is desirable whenever there is doubt concerning the validity of this normality assumption. In this paper a nonparametric approach is suggested, based on kernel density estimation applied first to (p+1) sample autocorrelations and second to (p+1) consecutive observations. A numerical comparison is made between Fisher's linear discrimination based on sample autocorrelations and kernel density discrimination, for AR and MA processes with and without Gaussian noise. The methods are applied to some seismological data.
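A sketch of the first variant on simulated AR(1) classes (not the authors' seismological data; the lag-zero autocorrelation, identically 1, is dropped here): summarize each series by its sample autocorrelations, fit one kernel density estimate per class, and assign a new series to the class with the higher estimated density.

```python
# Sketch: kernel density discrimination on sample autocorrelations.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

def sample_acf(x, lags):
    x = x - x.mean()
    denom = float(x @ x)
    return np.array([(x[:-k] @ x[k:]) / denom for k in range(1, lags + 1)])

def simulate_ar1(phi, n=300):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

lags = 3
feats0 = np.array([sample_acf(simulate_ar1(0.3), lags) for _ in range(200)])
feats1 = np.array([sample_acf(simulate_ar1(0.7), lags) for _ in range(200)])
kde0, kde1 = gaussian_kde(feats0.T), gaussian_kde(feats1.T)  # one KDE per class

new = sample_acf(simulate_ar1(0.7), lags)
print("class:", int(kde1(new)[0] > kde0(new)[0]))   # expect 1
```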

12.
In a proportional representation system, apportionment methods are used to round the vote proportions of the parties to integer numbers of seats in parliament. Assuming uniformly distributed vote proportions, we derive the seat allocation distributions for stationary divisor methods. An important characteristic of apportionment methods is their seat bias, that is, the expected difference between the actual seat number and the ideal share of seats when the parties are ordered from largest to smallest. We obtain seat bias formulas for the stationary divisor methods and for the quota method of greatest remainders. Acknowledgement. We thank Friedrich Pukelsheim for many fruitful discussions. Received March 2004.
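A sketch of the two method families compared in the paper; the seat-bias derivations themselves are analytic. The stationary divisor method with rounding parameter q contains D'Hondt (q = 1) and Sainte-Laguë (q = 1/2) as special cases (q = 0, Adams, would need a modified first divisor to avoid division by zero).

```python
# Sketch: stationary divisor methods and greatest remainders (Hamilton).
import numpy as np

def stationary_divisor(votes, seats, q=0.5):
    """Award seats one by one, each to the party maximizing v / (s + q)."""
    votes = np.asarray(votes, dtype=float)
    alloc = np.zeros(len(votes), dtype=int)
    for _ in range(seats):
        alloc[np.argmax(votes / (alloc + q))] += 1
    return alloc

def greatest_remainders(votes, seats):
    votes = np.asarray(votes, dtype=float)
    quota = votes / votes.sum() * seats        # ideal (fractional) shares
    alloc = np.floor(quota).astype(int)
    for i in np.argsort(quota - alloc)[::-1][: seats - alloc.sum()]:
        alloc[i] += 1                          # largest remainders get the rest
    return alloc

votes = [49000, 38000, 13000]
print(stationary_divisor(votes, 10, q=1.0))    # D'Hondt:      [5 4 1]
print(stationary_divisor(votes, 10, q=0.5))    # Sainte-Lague
print(greatest_remainders(votes, 10))          # Hamilton:     [5 4 1]
```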

13.
14.
In this paper, the maximum determinant of the associated 0-1 matrix in D-optimal saturated main effect plans for 3 × s_2 × s_3 factorials is derived using graph theory and combinatorics. The present work is related to a problem suggested by Chatterjee and Narasimhan (2002). Using the theoretical results, we also give the designs for s_3 ≥ s_2 + 1. This research was supported by the State Scholarships Foundation of Greece.

15.
Probability theory in fuzzy sample spaces
This paper tries to develop a neat and comprehensive probability theory for sample spaces where the events are fuzzy subsets of Euclidean space. The investigations are focussed on the discussion of how to equip those sample spaces with suitable σ-algebras and metrics. In the end we can point out a unified concept of random elements in the sample spaces under consideration, which is linked with compatible metrics to express random errors. The result is supported by presenting a strong law of large numbers, a central limit theorem and a Glivenko-Cantelli theorem for these kinds of random elements, formulated simultaneously w.r.t. the selected metrics. As a by-product, the line of reasoning followed within the paper enables us to generalize as well as bring together already known results and concepts from the literature. Acknowledgement. The author would like to thank the participants of the 23rd Linz Seminar on Fuzzy Set Theory for the intensive discussion of the paper. Especially he is indebted to Professors Diamond and Höhle, whose remarks have helped to get deeper insights into the subject. Additionally, the author is grateful to one anonymous referee for careful reading and valuable proposals which have led to an improvement of the first draft. This paper was presented at the 23rd Linz Seminar on Fuzzy Set Theory, Linz, Austria, February 5-9, 2002.

16.
Many econometric quantities such as long-term risk can be modeled by Pareto-like distributions and may also display long-range dependence. If Pareto is replaced by Gaussian, then one can consider fractional Brownian motion, whose increments, called fractional Gaussian noise, exhibit long-range dependence. There are many extensions of that process in the infinite variance stable case. Log-fractional stable noise (log-FSN) is a particularly interesting one. It is a stationary mean-zero stable process with infinite variance, parametrized by a tail index α between 1 and 2, and hence with heavy tails. The lower the value of α, the heavier the tail of the marginal distributions. The fact that α is less than 2 renders the variance infinite. Thus dependence between past and future cannot be measured using the correlation. There are other dependence measures that one can use, for instance the "codifference" or the "covariation". Since log-FSN is a moving average and hence "mixing", these dependence measures converge to zero as the lags between past and future become very large. The codifference, in particular, decreases to zero like a power function as the lag goes to infinity. Two parameters play an important role: (a) the value of the exponent, which depends on α and measures the speed of the decay; (b) a multiplicative constant of asymptoticity c which depends also on α. In this paper, it is shown that for symmetric α-stable log-FSN the constant c is positive and that the rate of decay of the codifference is such that one has long-range dependence. It is also proved that the same conclusion holds for the second measure of dependence, the covariation, which converges to zero with the same intensity and with a constant of asymptoticity which is positive as well.
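A sketch of how the codifference can be estimated from data via empirical characteristic functions. The series used here is a plain symmetric α-stable moving average with hypothetical weights, chosen only to illustrate the estimator; it is not log-FSN, and the paper's asymptotic constants are not computed.

```python
# Sketch: empirical codifference of a symmetric alpha-stable moving average.
import numpy as np
from scipy.stats import levy_stable

alpha, n = 1.5, 100_000
z = levy_stable.rvs(alpha, 0.0, size=n + 50, random_state=3)  # SaS innovations
weights = np.arange(1, 51, dtype=float) ** -1.5   # hypothetical MA weights
x = np.convolve(z, weights, mode="valid")[:n]     # stationary SaS moving average

def codifference(x, lag):
    """tau(lag) = ln E e^{i(X_0 - X_lag)} - ln E e^{i X_0} - ln E e^{-i X_lag}."""
    a, b = x[:-lag], x[lag:]
    cf = lambda u: np.mean(np.exp(1j * u))        # empirical char. function
    return (np.log(cf(a - b)) - np.log(cf(a)) - np.log(cf(-b))).real

for lag in (1, 5, 25, 100):
    print(lag, round(codifference(x, lag), 4))    # decays towards zero
```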

17.
This paper develops an asymptotic theory for test statistics in linear panel models that are robust to heteroskedasticity, autocorrelation and/or spatial correlation. Two classes of standard errors are analyzed. Both are based on nonparametric heteroskedasticity and autocorrelation consistent (HAC) covariance matrix estimators. The first class is based on averages of HAC estimators across individuals in the cross-section, i.e. "averages of HACs". This class includes the well-known cluster standard errors analyzed by Arellano (1987) as a special case. The second class is based on the HAC of cross-section averages and was proposed by Driscoll and Kraay (1998). The "HAC of averages" standard errors are robust to heteroskedasticity, serial correlation and spatial correlation, but weak dependence in the time dimension is required. The "averages of HACs" standard errors are robust to heteroskedasticity and serial correlation, including the nonstationary case, but they are not valid in the presence of spatial correlation. The main contribution of the paper is to develop a fixed-b asymptotic theory for statistics based on both classes of standard errors in models with individual and possibly time fixed-effects dummy variables. The asymptotics is carried out for large time sample sizes, for both fixed and large cross-section sample sizes. Extensive simulations show that the fixed-b approximation is usually much better than the traditional normal or chi-square approximation, especially for the Driscoll-Kraay standard errors. The use of fixed-b critical values will lead to more reliable inference in practice, especially for tests of joint hypotheses.
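A sketch (simulated panel, all numbers hypothetical) contrasting the two classes of standard errors in their simplest forms: the Arellano cluster estimator (an "average of HACs" using all within-individual lags) and the Driscoll-Kraay estimator (Newey-West applied to cross-section sums of the moment conditions). Finite-sample correction factors and the paper's fixed-b theory are omitted.

```python
# Sketch: cluster vs. Driscoll-Kraay standard errors for pooled OLS.
import numpy as np

rng = np.random.default_rng(4)
N, T, k = 50, 100, 2
X = rng.normal(size=(N, T, k))
beta = np.array([1.0, -0.5])
u = rng.normal(size=(N, T)) + rng.normal(size=T)  # common time shocks induce
y = X @ beta + u                                  # spatial correlation

Xf, yf = X.reshape(N * T, k), y.ravel()
bhat = np.linalg.solve(Xf.T @ Xf, Xf.T @ yf)      # pooled OLS
res = y - X @ bhat
XtX_inv = np.linalg.inv(Xf.T @ Xf)

# Driscoll-Kraay: Newey-West applied to h_t = sum_i x_it * u_it.
h = np.einsum("ntk,nt->tk", X, res)
L = int(4 * (T / 100) ** (2 / 9))                 # a common bandwidth rule
S = h.T @ h
for l in range(1, L + 1):
    G = h[l:].T @ h[:-l]
    S += (1 - l / (L + 1)) * (G + G.T)            # Bartlett kernel weights
print("Driscoll-Kraay SEs:", np.sqrt(np.diag(XtX_inv @ S @ XtX_inv)))

# Cluster by individual: sum of outer products of g_i = sum_t x_it * u_it.
g = np.einsum("ntk,nt->nk", X, res)
print("Cluster SEs:       ", np.sqrt(np.diag(XtX_inv @ (g.T @ g) @ XtX_inv)))
```

With common time shocks as above, the Driscoll-Kraay standard errors should come out noticeably larger than the cluster ones, reflecting the spatial correlation that the cluster estimator ignores.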
