Similar Articles
20 similar articles found.
1.
We present an alternative proof of Gibbard's random dictatorship theorem with ex post Pareto optimality. Gibbard (1977) showed that when the number of alternatives is finite and larger than two, and individual preferences are linear (strict), a strategy-proof decision scheme (a probabilistic analogue of a social choice function or a voting rule) is a convex combination of decision schemes which are, in his terms, either unilateral or duple. As a corollary of this theorem (credited to H. Sonnenschein) he showed that a decision scheme which is strategy-proof and satisfies ex post Pareto optimality is randomly dictatorial. We call this corollary Gibbard's random dictatorship theorem. We present a proof of this theorem which is direct and follows Gibbard's original approach closely. By focusing attention on the case with ex post Pareto optimality, our proof is simpler and more intuitive than Gibbard's original one. Received: 15 October 2001; Accepted: 23 May 2003; JEL Classification: D71, D72. Yasuhito Tanaka: The author is grateful to an anonymous referee and the Associate Editor of this journal for very helpful comments and suggestions. This research has been supported by a grant from the Zengin Foundation for Studies on Economics and Finance in Japan.
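To make the object of the theorem concrete, here is a toy sketch of a randomly dictatorial decision scheme: voter i is selected as dictator with a fixed probability, and that voter's top-ranked alternative is chosen. The function name and the example profile are ours, for illustration only, not Gibbard's notation.

```python
def random_dictatorship(preferences, weights):
    """Lottery over alternatives induced by a randomly dictatorial
    decision scheme: voter i is chosen as dictator with probability
    weights[i], and the dictator's top-ranked alternative is selected.
    preferences[i] is voter i's strict ranking, best alternative first."""
    lottery = {}
    for pref, w in zip(preferences, weights):
        lottery[pref[0]] = lottery.get(pref[0], 0.0) + w
    return lottery

# Three voters over alternatives {a, b, c}, dictator weights (0.5, 0.3, 0.2):
# alternative "a" is top for voters 1 and 3, "b" for voter 2.
prefs = [("a", "b", "c"), ("b", "a", "c"), ("a", "c", "b")]
lottery = random_dictatorship(prefs, [0.5, 0.3, 0.2])
```

Such a scheme is strategy-proof because a voter can influence the outcome distribution only through the alternative reported as their own top choice.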

2.
A statistical treatment of the problem of division
The problem of division is one of the most important problems in the emergence of probability. It has long been considered solved from a probabilistic viewpoint. However, we do not find the solution satisfactory. In this study, the problem is recast as a statistical problem. The outcomes of the matches of the game are treated as an infinitely exchangeable random sequence, and predictors/estimators are constructed in light of the de Finetti representation theorem. Bounds on the estimators are derived over wide classes of priors (mixing distributions). We find that, although conservative, the classical solutions are justifiable by our analysis, while the plug-in estimates are too optimistic for the winning player. Acknowledgement. The authors would like to thank the referees for the insightful and informative suggestions and, particularly, for referring us to important references. Supported by NSC-88-2118-M-259-009. Supported in part by NSC 89-2118-M-259-012. Received August 2002.
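For reference, the classical Pascal-Fermat solution the abstract contrasts with can be computed directly: if player A needs `a` more wins and B needs `b`, at most a+b-1 further rounds settle the match. This is a minimal sketch (the function name is ours); a "plug-in" estimate simply replaces the fair-coin probability by A's observed win frequency, which the abstract argues is over-optimistic for the leading player.

```python
from math import comb

def division_share(a, b, p=0.5):
    """Classical (Pascal-Fermat) solution of the problem of points:
    probability that the player needing `a` more wins takes the stakes,
    when the opponent needs `b` more and each round is won with
    probability p. At most a + b - 1 further rounds decide the match."""
    n = a + b - 1
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(a, n + 1))

# Player A needs 1 more win, B needs 2: A's fair share is 3/4.
share = division_share(1, 2)
# Plug-in version: p replaced by A's observed win frequency, e.g. 2/3.
plugin = division_share(1, 2, p=2/3)
```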

3.
A normality assumption is usually made for the discrimination between two stationary time series processes. A nonparametric approach is desirable whenever there is doubt concerning the validity of this normality assumption. In this paper a nonparametric approach is suggested based on kernel density estimation, firstly on (p+1) sample autocorrelations and secondly on (p+1) consecutive observations. A numerical comparison is made between Fisher's linear discrimination based on sample autocorrelations and kernel density discrimination for AR and MA processes with and without Gaussian noise. The methods are applied to some seismological data.
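The first variant (kernel density estimation on sample autocorrelations) can be sketched as follows. This is our own minimal illustration, not the paper's implementation: two AR(1) classes are characterized by their autocorrelation vectors, class-conditional densities are estimated with a Gaussian kernel, and a new series is assigned to the class with the higher estimated density.

```python
import numpy as np
from scipy.stats import gaussian_kde

def acf_features(x, p):
    """First p+1 sample autocorrelations (lags 0..p) of a series."""
    x = np.asarray(x, float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                     for k in range(p + 1)])

rng = np.random.default_rng(0)

def ar1(phi, n=200):
    """Simulate an AR(1) series x_t = phi * x_{t-1} + e_t."""
    e = rng.standard_normal(n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

p = 2
# Training ACF vectors from the two classes (phi = 0.8 vs phi = -0.8);
# the lag-0 autocorrelation is always 1, so keep lags 1..p only.
train_a = np.array([acf_features(ar1(0.8), p)[1:] for _ in range(100)]).T
train_b = np.array([acf_features(ar1(-0.8), p)[1:] for _ in range(100)]).T
kde_a, kde_b = gaussian_kde(train_a), gaussian_kde(train_b)

def classify(x):
    """Assign a series to the class with the higher estimated density."""
    f = acf_features(x, p)[1:]
    return "A" if kde_a(f)[0] > kde_b(f)[0] else "B"
```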

4.
Probability theory in fuzzy sample spaces
This paper tries to develop a neat and comprehensive probability theory for sample spaces whose events are fuzzy subsets of an underlying base space. The investigations are focused on the discussion of how to equip those sample spaces with suitable σ-algebras and metrics. In the end we can point out a unified concept of random elements in the sample spaces under consideration which is linked with compatible metrics to express random errors. The result is supported by presenting a strong law of large numbers, a central limit theorem and a Glivenko-Cantelli theorem for these kinds of random elements, formulated simultaneously w.r.t. the selected metrics. As a by-product, the line of reasoning followed within the paper enables us to generalize as well as to bring together already known results and concepts from the literature. Acknowledgement. The author would like to thank the participants of the 23rd Linz Seminar on Fuzzy Set Theory for the intensive discussion of the paper. He is especially indebted to Professors Diamond and Höhle, whose remarks have helped to gain deeper insights into the subject. Additionally, the author is grateful to one anonymous referee for careful reading and valuable proposals which have led to an improvement of the first draft. This paper was presented at the 23rd Linz Seminar on Fuzzy Set Theory, Linz, Austria, February 5–9, 2002.

5.
Ansgar Steland, Metrika (2004) 60(3): 229–249
Motivated in part by applications in model selection in statistical genetics and sequential monitoring of financial data, we study an empirical process framework for a class of stopping rules which rely on kernel-weighted averages of past data. We are interested in the asymptotic distribution for time series data and an analysis of the joint influence of the smoothing policy and the alternative defining the deviation from the null model (in-control state). We employ a certain type of local alternative which provides meaningful insights. Our results hold true for short memory processes which satisfy a weak mixing condition. By relying on an empirical process framework we obtain both asymptotic laws for the classical fixed sample design and the sequential monitoring design. As a by-product we establish the asymptotic distribution of the Nadaraya-Watson kernel smoother when the regressors do not get dense as the sample size increases. Acknowledgements. The author is grateful to two anonymous referees for their constructive comments, which improved the paper. One referee drew my attention to Lifshits' paper. The financial support of the Collaborative Research Centre "Reduction of Complexity in Multivariate Data Structures" (SFB 475) of the German Research Foundation (DFG) is gratefully acknowledged.
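The Nadaraya-Watson smoother mentioned at the end is just a kernel-weighted average of responses; a minimal sketch (our own toy data, Gaussian kernel, fixed bandwidth, not the paper's monitoring setup):

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    """Nadaraya-Watson kernel regression estimate at x0 with a Gaussian
    kernel and bandwidth h: a kernel-weighted average of the responses."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 400)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)

# Estimate the regression function at x = 0.25, where sin(2*pi*x) = 1.
est = nadaraya_watson(0.25, x, y, h=0.05)
```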

6.
A critical discussion of a comparative growth analysis of Central and Eastern European (CEE) countries is performed. The main conclusion is that there was economic convergence for most CEE accession candidates, but not between them and Western Europe. The results justify a separation into first- and second-wave accession countries, but also cast doubt on differences within Central and Eastern Europe between accession and non-accession countries. This paper critically examines theories and empirical studies for three types of convergence, namely β, σ, and club convergence. Each can be in absolute terms or conditional on the long-term equilibrium (steady state) of each country. Empirical results are provided for all types of convergence from 1996 to 2000, both with population-weighted and non-weighted data. The analysis is performed for differently framed country subgroups, including Western Europe for better comparability. Once absolute convergence is found through a unit root test on a standard-deviation time series of cross-sectional income per capita (σ convergence), the regression coefficient for initial income per capita, with average growth over the sample period as dependent variable (β convergence), establishes the speed of this process. The same method applies to the conditional version by using the distance of income from the corresponding steady state instead of the level of GDP. Markov chain probability matrices (club convergence) then provide information about the past behaviour of the whole cross-sectional income distribution over time, as well as about the mobility of single countries within that distribution.
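The two classical diagnostics can be illustrated on simulated data (our own toy example, not the paper's dataset): σ convergence means the cross-sectional dispersion of log income shrinks over time, and β convergence means average growth regresses negatively on initial income.

```python
import numpy as np

rng = np.random.default_rng(2)
# Log income per capita for 10 economies over 20 years, generated so
# that poorer economies grow faster (convergence toward y* = 10).
T, N, lam = 20, 10, 0.1
y = np.empty((T, N))
y[0] = rng.uniform(8, 12, N)
for t in range(1, T):
    y[t] = y[t - 1] + lam * (10 - y[t - 1]) + 0.01 * rng.standard_normal(N)

# sigma convergence: cross-sectional dispersion shrinks over time.
sigma_start, sigma_end = y[0].std(), y[-1].std()

# beta convergence: regress average growth on initial income;
# a negative slope is the classical convergence finding.
growth = (y[-1] - y[0]) / (T - 1)
beta = np.polyfit(y[0], growth, 1)[0]
```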

7.
Elfving's method is a well-known graphical procedure for obtaining c-optimal designs. Although the method is valid in any dimension, it is rarely used for more than two parameters. Its usefulness seems to be constrained by the difficulty of constructing the Elfving set. In this paper a computational procedure for finding c-optimal designs using Elfving's method in more than two dimensions is provided. It is suitable for any model, achieving efficient performance for three- or four-dimensional models. The procedure can be implemented in most programming languages. Acknowledgments. The authors would like to thank Dr. Torsney for his helpful comments. This research was supported by a grant from JCyL SA004/01.
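To fix ideas about what c-optimality means, here is a numerical sketch (ours, not the paper's procedure): for simple linear regression on [-1, 1] with c = (0, 1) (estimating the slope), minimize the criterion c'M(ξ)⁻¹c over designs putting weight w at -1 and 1-w at +1. The equal-weight design is optimal, which Elfving's geometric construction would confirm.

```python
import numpy as np

def c_criterion(w, c=np.array([0.0, 1.0])):
    """c-optimality criterion c' M(xi)^{-1} c for the two-point design
    putting weight w at x = -1 and 1 - w at x = +1 in simple linear
    regression E[y] = b0 + b1*x with regressor f(x) = (1, x)."""
    pts = np.array([-1.0, 1.0])
    F = np.column_stack([np.ones(2), pts])
    M = F.T @ (np.array([w, 1 - w])[:, None] * F)
    return c @ np.linalg.solve(M, c)

# Brute-force search over the weight: the minimum is at w = 1/2,
# where the criterion value equals 1 (variance of the slope estimator).
ws = np.linspace(0.01, 0.99, 99)
vals = [c_criterion(w) for w in ws]
w_star = ws[int(np.argmin(vals))]
```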

8.
We consider a multiple testing problem based on an i.i.d. sample of K-dimensional observations. We want to test whether at least one of the unknown means is positive. We propose a sequential test which is of the nature of a multiple truncated sequential probability ratio test. We asymptotically analyse the expected sample size and compare it to the sample sizes which arise when one looks at effects separately.
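A single-stream truncated SPRT, the building block of the multiple test described above, can be sketched as follows (our own toy version for one normal mean; the paper's test combines K such components):

```python
import numpy as np

def truncated_sprt(xs, theta1=0.5, a=2.2, b=-2.2, nmax=200):
    """Truncated sequential probability ratio test of H0: mu = 0
    against H1: mu = theta1 for N(mu, 1) data: stop as soon as the
    log likelihood ratio leaves (b, a), or after nmax observations
    (then decide by the sign of the statistic)."""
    llr, n = 0.0, 0
    for n, x in enumerate(xs[:nmax], start=1):
        llr += theta1 * x - 0.5 * theta1 ** 2   # log f1(x) / f0(x)
        if llr >= a:
            return "reject H0", n
        if llr <= b:
            return "accept H0", n
    return ("reject H0" if llr > 0 else "accept H0"), n

rng = np.random.default_rng(3)
# Data with a clearly positive mean: the test should stop early.
decision, n_used = truncated_sprt(rng.normal(1.5, 1.0, 200))
```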

9.
We consider the uniformly most powerful unbiased (UMPU) one-sided test for the comparison of two proportions based on sample sizes m and n, i.e., the randomized version of Fisher's exact one-sided test. It will be shown that the power function of the one-sided UMPU-test based on sample sizes m and n can coincide with the power function of the UMPU-test based on sample sizes m+1 and n for certain levels on the entire parameter space. A characterization of all such cases with identical power functions is derived. Finally, this characterization is closely related to number theoretical problems concerning Fermat-like binomial equations. Some consequences for Fisher's original exact test will be discussed, too.
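The randomized conditional test in question rejects for large values of the hypergeometric count, with randomization on the boundary so that the conditional size is exactly the nominal level. A minimal sketch (function name and example numbers are ours):

```python
from scipy.stats import hypergeom

def randomized_fisher(level, m, n, s, x):
    """Rejection probability of the level-`level` randomized (UMPU)
    one-sided conditional test given column total s: under H0 the count
    x follows Hypergeom(m + n, m, s); reject for x above a critical
    value c, and randomize at x == c so the conditional size is exact."""
    rv = hypergeom(m + n, m, s)
    support = range(max(0, s - n), min(m, s) + 1)
    c = min(k for k in support if rv.sf(k) <= level)
    gamma = (level - rv.sf(c)) / rv.pmf(c)
    if x > c:
        return 1.0
    if x == c:
        return gamma
    return 0.0

# The conditional size equals the nominal level exactly:
m, n, s, level = 5, 5, 4, 0.05
rv = hypergeom(m + n, m, s)
size = sum(rv.pmf(k) * randomized_fisher(level, m, n, s, k)
           for k in range(0, min(m, s) + 1))
```

The non-randomized version of Fisher's exact test drops the boundary randomization and is therefore conservative, which is why the UMPU analysis works with the randomized test.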

10.
The interest in Data Envelopment Analysis (DEA) as a method for analyzing the productivity of homogeneous Decision Making Units (DMUs) has significantly increased in recent years. One of the main goals of DEA is to measure, for each DMU, its production efficiency relative to the other DMUs under analysis. Apart from a relative efficiency score, DEA also provides reference DMUs for inefficient DMUs. An inefficient DMU has, in general, more than one reference DMU, and an efficient DMU may be a reference unit for a large number of inefficient DMUs. These reference and efficiency relations describe a network connecting efficient and inefficient DMUs. We visualize this network by applying Sammon's mapping. Such a visualization provides a very compact representation of the respective reference and efficiency relations, and it helps to identify, for an inefficient DMU, efficient DMUs (or DMUs with a high efficiency score) that have a similar structure and can therefore serve as role models. Furthermore, it can also be applied to visualize potential outliers in a very efficient way. JEL Classification: C14, C61, D24, M2
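The efficiency-scoring step can be sketched as a linear program (the Sammon-mapping visualization is omitted here). This is a standard input-oriented CCR formulation under constant returns to scale, our own minimal version rather than the paper's code: minimize θ subject to the reference technology dominating the scaled unit.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR (constant returns) DEA efficiency of DMU o:
    minimize theta subject to X @ lam <= theta * x_o, Y @ lam >= y_o,
    lam >= 0.  X is (inputs x DMUs), Y is (outputs x DMUs)."""
    n = X.shape[1]
    c = np.r_[1.0, np.zeros(n)]                # variables: [theta, lam]
    A_ub = np.block([
        [-X[:, [o]], X],                       #  X lam - theta x_o <= 0
        [np.zeros((Y.shape[0], 1)), -Y],       # -Y lam <= -y_o
    ])
    b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# One input, one output, three DMUs; DMU 0 is on the frontier.
X = np.array([[2.0, 4.0, 4.0]])
Y = np.array([[2.0, 3.0, 2.0]])
effs = [ccr_efficiency(X, Y, o) for o in range(3)]
```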

11.
12.
13.
In this article we analyse the number of "no opinion" answers to attitude items. We argue that this number can be considered a count variable that should be analysed using Poisson regression or negative binomial regression. Since we are interested in the effect of both respondent and interviewer characteristics on the number of "no opinion" answers, we use multilevel analysis, which takes into account the hierarchical structure of the data. Consequently, multilevel Poisson regression and multilevel negative binomial regression are applied. Our analysis shows that answering "no opinion" is related to some sociodemographic respondent characteristics. In addition, we find a significant interviewer effect, but we are not able to explain that effect in terms of interviewer variables.
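The single-level version of the count model is easy to sketch. Below, a Poisson regression with log link is fitted by iteratively reweighted least squares on simulated data; the covariate name and the coefficients are ours, for illustration, and the interviewer-level random effects of the multilevel model are omitted.

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Poisson regression (log link) fitted by iteratively reweighted
    least squares; returns the coefficient vector beta."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        W = mu                               # GLM working weights
        z = X @ beta + (y - mu) / mu         # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(4)
n = 2000
age = rng.uniform(-1, 1, n)                  # standardized respondent trait
X = np.column_stack([np.ones(n), age])
true_beta = np.array([0.5, 0.8])
y = rng.poisson(np.exp(X @ true_beta))       # counts of "no opinion" answers
beta_hat = poisson_irls(X, y)
```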

14.
15.
Consider N independent stochastic processes \((X_i(t), t\in [0,T])\), \(i=1,\ldots , N\), defined by a stochastic differential equation with random effects, where the drift term depends linearly on a random vector \(\Phi _i\) and the diffusion coefficient depends on another linear random effect \(\Psi _i\). For these effects, we consider a joint parametric distribution. We propose and study two approximate likelihoods for estimating the parameters of this joint distribution based on discrete observations of the processes on a fixed time interval. Consistent and \(\sqrt{N}\)-asymptotically Gaussian estimators are obtained when both the number of individuals and the number of observations per individual tend to infinity. The estimation methods are investigated on simulated data and show good performance.
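A stripped-down simulation of this setting (our own toy version: only a scalar random drift effect, no random diffusion effect, and a naive moment estimator instead of the paper's approximate likelihoods) illustrates how population parameters are recovered from many short discretely observed paths:

```python
import numpy as np

rng = np.random.default_rng(8)
N, n, T = 200, 100, 1.0
dt = T / n
mu, omega, sigma = 2.0, 0.5, 0.3

# Each path has its own random drift Phi_i ~ N(mu, omega^2):
#   dX_i(t) = Phi_i dt + sigma dW_i(t),  X_i(0) = 0,
# observed at n equispaced times on [0, T].  The Euler scheme is exact
# here because drift and diffusion are constant along each path.
phi = rng.normal(mu, omega, N)
incr = phi[:, None] * dt + sigma * np.sqrt(dt) * rng.standard_normal((N, n))
X = np.cumsum(incr, axis=1)

# Naive estimator of the population mean drift mu: average of the
# individual endpoint slopes X_i(T) / T across the N paths.
mu_hat = float(np.mean(X[:, -1] / T))
```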

16.
Anna Dembińska, Metrika (2017) 80(3): 319–332
Assume that a sequence of observations \((X_n; n\ge 1)\) forms a strictly stationary process with an arbitrary univariate cumulative distribution function. We investigate the almost sure asymptotic behavior of the proportions of observations in the sample that fall into a random region determined by a given Borel set and a sample quantile. We provide sufficient conditions under which these proportions converge almost surely and describe the law of the limiting random variable.
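The quantity studied can be illustrated numerically (our own i.i.d. Gaussian example, which is a special case of strict stationarity): take the Borel set B = [0, ∞) and the sample 0.9-quantile; the proportion of observations falling in B below the quantile converges to 0.9 - 0.5 = 0.4.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.standard_normal(10_000)    # stationary (here: i.i.d. N(0,1)) sample
q = np.quantile(x, 0.9)            # sample 0.9-quantile (a random threshold)

# Proportion of observations in the random region B ∩ (-inf, q] with
# B = [0, inf); for N(0,1) this converges a.s. to 0.9 - 0.5 = 0.4.
prop = float(np.mean((x >= 0) & (x <= q)))
```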

17.
This paper uses Monte Carlo experimentation to investigate the finite sample properties of the maximum likelihood (ML) and corrected ordinary least squares (COLS) estimators of the half-normal stochastic frontier production function. Results indicate substantial bias in both ML and COLS when the percentage contribution of inefficiency in the composed error (denoted by γ*) is small, and also that ML should be used in preference to COLS because of large mean square error advantages when γ* is greater than 50%. The performance of a number of tests of the existence of technical inefficiency is also investigated. The Wald and likelihood ratio (LR) tests are shown to have incorrect size. A one-sided LR test and a test of the significance of the third moment of the OLS residuals are suggested as alternatives, and are shown to have correct size, with the one-sided LR test having the better power of the two. The author would like to thank Bill Griffiths, George Battese, Howard Doran, Bill Greene and two anonymous referees for valuable comments. Any errors which remain are those of the author.
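The third-moment diagnostic underlying COLS is simple to demonstrate: with composed error v - u, where u is half-normal inefficiency, the OLS residuals are negatively skewed. A minimal simulation sketch (our own parameter values, not the paper's design):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5000
x = rng.uniform(0, 1, n)
u = np.abs(rng.standard_normal(n))   # half-normal technical inefficiency
v = 0.3 * rng.standard_normal(n)     # symmetric statistical noise
y = 1.0 + 0.5 * x + v - u            # frontier in logs, composed error v - u

# COLS starts from OLS; a significantly negative third central moment
# of the OLS residuals signals the one-sided inefficiency component.
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
m3 = float(np.mean(resid ** 3))
```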

18.
Dr. A. Irle, Metrika (1980) 27(1): 15–28
A continuous-time stochastic process occurring in a representation of the likelihood ratio process for Gaussian processes with common covariance kernel is shown to be a Wiener process with respect to a certain family of σ-algebras. This is applied to the problem of sequentially testing one-sided hypotheses for Gaussian processes, and it is proved that certain continuous-time SPRTs are locally best sequential tests under some restrictions.

19.
Abstract

This paper discusses the maximum likelihood estimator of a general unbalanced spatial random effects model with normal disturbances, assuming that some observations are missing at random. Monte Carlo simulations show that the maximum likelihood estimator for unbalanced panels performs well and that missing observations affect mainly the root mean square error. As expected, these estimates are less efficient than those based on the unobserved balanced model, especially if the share of missing observations is large or spatial autocorrelation in the error terms is pronounced.


20.
This paper provides empirical evidence suggesting that fundamentals matter for stock price fluctuations once the temporal instability underpinning stock price relations is accounted for. Specifically, this study extends the out-of-sample forecasting methodology of Meese and Rogoff (J Int Econ 14:3–24, 1983) to the stock market after explicitly testing for parameter nonconstancy using recursive techniques. The predictive ability of a present value model based on Imperfect Knowledge Economics (IKE) is found to match that of the pure random walk benchmark at short forecasting horizons and to perform significantly better at medium to longer-run horizons based on conventional measures of predictability and direction-of-change statistics. In addition, the presence of a cointegrating relation is found only within regimes of statistical parameter constancy. Augmenting the MR methodology in a piecewise linear fashion yields empirical results in favor of a fundamentals-based account of stock price behavior, overturning the recent results of Flood and Rose (2010).
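The Meese-Rogoff-style horse race can be sketched on simulated data (our own toy process, not the paper's IKE model): compare one-step-ahead out-of-sample RMSEs of the random-walk forecast (today's price) and a fundamentals-based forecast. When deviations of price from the fundamental are transitory, the fundamentals forecast wins.

```python
import numpy as np

def rmse(a, b):
    """Root mean squared forecast error."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

rng = np.random.default_rng(7)
T = 500
fundamental = np.cumsum(0.1 * rng.standard_normal(T))   # slow random walk
price = fundamental + 0.5 * rng.standard_normal(T)      # transitory noise

# One-step-ahead out-of-sample comparison, Meese-Rogoff style:
rw_forecast = price[:-1]          # random walk: forecast = today's price
fund_forecast = fundamental[:-1]  # fundamentals-based forecast
actual = price[1:]
rmse_rw = rmse(actual, rw_forecast)
rmse_fund = rmse(actual, fund_forecast)
```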


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号