1.
We consider the problem of allocating a set of indivisible objects to agents in a fair and efficient manner. In a recent paper, Bogomolnaia and Moulin consider the case in which all agents have strict preferences, and propose the probabilistic serial (PS) mechanism; they define a new notion of efficiency, called ordinal efficiency, and prove that the probabilistic serial mechanism finds an envy-free, ordinally efficient assignment. However, the restrictive assumption of strict preferences is critical to their algorithm. Our main contribution is an analogous algorithm for the full preference domain, in which agents are allowed to be indifferent between objects. Our algorithm is based on a reinterpretation of the PS mechanism as an iterative algorithm that computes a "flow" in an associated network. In addition, we show that on the full preference domain it is impossible for even a weakly strategyproof mechanism to find a random assignment that is both ordinally efficient and envy-free.
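A minimal sketch of the simultaneous-eating procedure underlying the PS mechanism, for the strict-preference case only (the paper's network-flow extension to indifferences is not shown; function name and data layout are illustrative):

```python
from fractions import Fraction

def probabilistic_serial(prefs):
    """Simultaneous-eating sketch of the PS mechanism (strict preferences).

    prefs[i] is agent i's strict preference order over objects; assumes
    one unit of each object and total demand of one unit per agent.
    Returns P with P[i][o] = probability that agent i receives object o.
    """
    objects = {o for p in prefs for o in p}
    remaining = {o: Fraction(1) for o in objects}
    P = [{o: Fraction(0) for o in objects} for _ in prefs]
    t = Fraction(0)
    while t < 1:
        # Each agent eats its favorite object that still has mass left.
        fav = {}
        for i, p in enumerate(prefs):
            for o in p:
                if remaining[o] > 0:
                    fav[i] = o
                    break
        if not fav:
            break
        eaters = {}
        for i, o in fav.items():
            eaters.setdefault(o, []).append(i)
        # Advance time until some object is exhausted (or agents are full).
        dt = min(min(remaining[o] / len(g) for o, g in eaters.items()), 1 - t)
        for o, g in eaters.items():
            for i in g:
                P[i][o] += dt
            remaining[o] -= dt * len(g)
        t += dt
    return P
```

With identical preferences both agents split each object equally; with opposed preferences each agent receives its favorite with certainty.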
2.
This paper considers the problem of procuring honest responses in order to estimate the population mean of a sensitive quantitative characteristic. It points out that potential differences between sub-populations appear to go unnoticed under the usual optional randomized response technique. To cover all such situations, we propose two alternative survey procedures that allow the mean and the variance to be estimated unbiasedly. Estimation of the sensitivity level of a question is considered as well. Efficiency comparisons are also carried out to study the performance of the proposed procedures.
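For context, a sketch of the classic additive scrambled-response model that optional randomized response techniques build on (this is the textbook baseline, not the paper's two proposed procedures; all names and parameters are illustrative):

```python
import random
import statistics

def additive_rrt_mean(true_values, scramble_mean=0.0, scramble_sd=2.0, seed=0):
    """Additive scrambled-response sketch: each respondent reports
    X + S, where S is drawn from a distribution whose parameters are
    known to the surveyor, so no individual true value is disclosed.
    Since E[X + S] = E[X] + E[S], mean(reports) - E[S] is an unbiased
    estimator of the population mean of X."""
    rng = random.Random(seed)
    reports = [x + rng.gauss(scramble_mean, scramble_sd) for x in true_values]
    return statistics.fmean(reports) - scramble_mean
```

The estimator's variance exceeds that of a direct-question survey by Var(S)/n, which is the privacy cost the efficiency comparisons in such papers quantify.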
3.
Early survey statisticians faced a puzzling choice between randomized sampling and purposive selection but, by the early 1950s, Neyman's design-based or randomization approach had become generally accepted as standard. It remained virtually unchallenged until the early 1970s, when Royall and his co-authors produced an alternative approach based on statistical modelling. This revived the old idea of purposive selection, under the new name of "balanced sampling". Suppose that the sampling strategy to be used for a particular survey is required to involve both a stratified sampling design and the classical ratio estimator, but that, within each stratum, a choice is allowed between simple random sampling and simple balanced sampling; then which should the survey statistician choose? The balanced sampling strategy appears preferable in terms of robustness and efficiency, but the randomized design has certain countervailing advantages. These include the simplicity of the selection process and an established public acceptance that randomization is "fair". It transpires that nearly all the advantages of both schemes can be secured if simple random samples are selected within each stratum and a generalized regression estimator is used instead of the classical ratio estimator.
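A sketch of the two estimators being compared, for a single stratum under simple random sampling with one auxiliary variable (a simplified illustration, not the paper's full stratified treatment; function names are illustrative):

```python
import statistics

def ratio_estimate(y, x, X_total):
    """Classical ratio estimator of the population total of y under SRS:
    (sample mean of y / sample mean of x) * known population total of x."""
    return statistics.fmean(y) / statistics.fmean(x) * X_total

def greg_estimate(y, x, X_total, N):
    """Generalized regression (GREG) estimator under SRS with one
    auxiliary variable: N * (ybar + b * (Xbar - xbar)), where b is
    the OLS slope of y on x in the sample."""
    ybar, xbar = statistics.fmean(y), statistics.fmean(x)
    sxx = sum((a - xbar) ** 2 for a in x)
    sxy = sum((a - xbar) * (c - ybar) for a, c in zip(x, y))
    b = sxy / sxx
    return N * (ybar + b * (X_total / N - xbar))
```

When y is exactly proportional to x both estimators agree; the GREG estimator additionally absorbs an intercept, which is one source of the robustness gain the abstract alludes to.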
4.
If total social income is fixed and a social planner is uninformed of the utility representations of different individuals, then Lerner showed that the social optimum is to distribute income equally across individuals. We show that the planner can, through randomization, in some circumstances induce individuals to reveal information about the curvature of their utility functions, and then use that information to move away from equality on average. However, whether this is optimal depends in part on unobservable beliefs of the planner. These may be viewed as an aspect of the planner's ethical judgements, or as something entirely arbitrary. Received: January 11, 2000; revised version: June 26, 2001
5.
This paper compares low-discrepancy sequences in terms of actual performance in numerical computation for option pricing. For that purpose, we construct a variety of randomized low-discrepancy sequences based on classical low-discrepancy sequences. A randomization structure using coordinate-wise and digit-wise permutations proves to give excellent results regardless of the underlying classical low-discrepancy sequence. This paper represents only the author's personal opinion, and has absolutely nothing to do with his affiliation.
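A minimal sketch of digit-wise permutation scrambling applied to a one-dimensional Halton sequence (one of the classical sequences such constructions start from; the paper's specific randomization structure may differ, and the truncation depth and names here are illustrative):

```python
import random

def scrambled_halton(n, base=2, seed=0, digits=16):
    """One-dimensional Halton sequence with an independent random
    permutation of {0, ..., base-1} applied at each digit position,
    a simple form of digit-wise scrambling.  The expansion is
    truncated at `digits` base-`base` digits."""
    rng = random.Random(seed)
    perms = [rng.sample(range(base), base) for _ in range(digits)]
    points = []
    for i in range(1, n + 1):
        x, k = 0.0, i
        for d in range(digits):
            k, r = divmod(k, base)          # d-th digit of i in the base
            x += perms[d][r] / base ** (d + 1)
        points.append(x)
    return points
```

Because each digit map is a bijection, distinct indices yield distinct points, and the scrambled sequence keeps the low-discrepancy structure while its randomness supports error estimation in quasi-Monte Carlo pricing.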
6.
This paper studies how to assign "monitors" to productive agents in order to generate signals about the agents' performance that are most useful from a contracting perspective. We show that if signals generated by the same monitor are negatively (positively) correlated, then the optimal monitoring assignment will be "focused" ("dispersed"). This holds because dispersed monitoring allows the firm to better utilize relative performance evaluation. On the other hand, if each monitor communicates only an aggregated signal to the principal, then focused monitoring is always optimal since aggregation undermines relative performance evaluation. We also study team‐based compensation and randomized monitoring assignments. In particular, we show that the firm can gain from randomizing the monitoring assignment, compared with the optimal linear deterministic contract. Furthermore, under randomization, the conditional expected utility for the agent is higher when the agent is not monitored compared with the case where the agent is monitored. That is, the chance of being monitored serves as a "stick" rather than a "carrot".
7.
Endogenous formation of peer groups often plagues studies on peer effects. Exploiting quasi-random assignment of peers to individual students that takes place in middle schools of South Korea, we examine the existence and detailed structure of academic interactions among classroom peers. We find that mean achievement of one's peers is positively correlated with a student's performance (standardized mathematics test score). Employing IV methods, we show that such a relationship is causal: the improvement in peer quality enhances a student's performance. Quantile regressions reveal that weak students interact more closely with other weak students than with strong students; hence their learning can be delayed by the presence of worst-performing peers. In contrast, strong students are found to interact more closely with other strong students; hence their learning can be improved by the presence of best-performing peers. We also examine the implications of these findings for two class formation methods: ability grouping and mixing.
8.
This paper considers the necessity and sufficiency of multiple certainty equilibria for sunspot effects, and shows that neither implication is valid. This claim is made for models with incomplete markets and numeraire assets. First, I prove that a multiplicity of certainty equilibria is neither necessary nor sufficient for sunspot effects by way of two counter-examples. Second, I verify over an entire subset of economies that equilibrium with sunspot effects can never be characterized as a randomization over multiple certainty equilibria.
9.
A decade ago, Fama and French [Fama, E.F., French, K.R., 1988. Permanent and temporary components of stock prices. J. Political Econ. 96 (2), 246–273] estimated that 40% of the variation in stock returns was predictable over horizons of 3–5 years, which they attributed to a mean-reverting stationary component in prices. While it has been clear that the Depression and war years exert a strong influence on these estimates, it has not been clear whether the large returns of that period contribute to the information in the data or are rather a source of noise to be discounted in estimation. This paper uses the Gibbs-sampling-augmented randomization methodology to address the problem of heteroskedasticity in the estimation of multi-period return autoregressions. Extending the sample period to 1995, we find little evidence of mean reversion. Examining subsamples, only 1926–1946 provides any evidence of mean reversion, while the post-war period is characterized by mean aversion. A test of structural change suggests that this difference between the pre- and post-war periods is significant.
10.
Randomization to treatment is fundamental to statistical control in the design of experiments. However, randomization implies some uncertainty about treatment condition, and individuals differ in their preferences toward taking on risk. Since human subjects often volunteer for experiments, or are allowed to drop out at any time, the sample observed in an experiment may be biased by the risk that randomization itself creates. On the other hand, the widespread use of a guaranteed, non-stochastic show-up fee may generate sample selection bias in the opposite direction, drawing more risk-averse subjects into experiments. We directly test the hypothesis that risk attitudes play a role in sample selection. Our results suggest that randomization bias does affect the overall level of risk aversion in the sample we observe, but that it does not affect the demographic mix of risk attitudes in the sample. We also show that the common use of non-stochastic show-up fees can generate samples that are more risk averse than would otherwise have been observed.