Similar Literature
20 similar documents found
1.
We study the performance of voting systems in terms of minimizing the overall social disutility of making a collective choice in a univariate voting space with ideological voting and perfect information. In order to obtain a distribution of the performance indicator for each of the 12 systems chosen for this study—Baldwin’s Method, Black’s Method, the Borda Count, Bucklin’s Grand Junction System, Coombs’ Method, Dodgson’s System, Instant Run-Off Voting, Plurality, Simpson’s MinMax, Tideman’s Ranked Pairs, Schulze’s Beatpath Method, and Two-Round Majority—we simulate elections using an agent-based computational approach under several different distributions of voter and candidate positions, with up to 15 available candidates. At each iteration, voters generate complete and strict ordinal utility functions over the set of available candidates, based on which each voting system computes a winner. We define the performance of a system as its capability of choosing, among the available candidates, the one that minimizes aggregate voter disutility. As expected, the results show an overall dominance of Condorcet completion methods over the traditional and more widely used voting systems, regardless of the distributions of voter and candidate positions.
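A minimal sketch of the kind of simulation the abstract describes (not the authors' code; the normal positioning distributions, the candidate count, and the plurality-only comparison are illustrative assumptions). Extending the skeleton to the twelve studied rules amounts to swapping in the corresponding winner functions on the same disutility matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_election(n_voters=1000, n_candidates=5):
    """One iteration: 1D ideological positions, disutility = distance."""
    voters = rng.normal(0, 1, n_voters)
    candidates = rng.normal(0, 1, n_candidates)
    # Disutility of each candidate for each voter: |voter - candidate|.
    disutility = np.abs(voters[:, None] - candidates[None, :])
    # Socially optimal candidate minimizes aggregate disutility.
    optimal = disutility.sum(axis=0).argmin()
    # Plurality winner: candidate ranked first by the most voters.
    first_choices = disutility.argmin(axis=1)
    plurality = np.bincount(first_choices, minlength=n_candidates).argmax()
    return optimal, plurality

hits = sum(o == p for o, p in (simulate_election() for _ in range(500)))
print(f"Plurality picked the disutility-minimizing candidate in {hits/500:.0%} of runs")
```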

2.
As China enters the twenty-first century the health of the agricultural economy will increasingly rely, not on the growth of inputs, but on the growth of total factor productivity (TFP). However, the tremendous changes in the sector—sometimes backwards and sometimes forwards—as well as evolving institutions make it difficult to gauge from casual observation whether the sector is healthy or not. Research spending has waxed and waned. Policies to encourage the import of foreign technologies have been applied unevenly. Structural adjustment policies also triggered wrenching changes in the sector. Horticulture and livestock production have boomed, while the output of other crops, such as rice, wheat and soybeans, has stagnated or fallen. At a time when China’s millions of producers are faced with complex decisions, the extension system is crumbling and farmer professional associations remain in their infancy. In short, there are just as many reasons to be optimistic about the productivity trends in agriculture as to be pessimistic. In this paper, we pursue one overall goal: to better understand the productivity trends in China’s agricultural sector during the reform era—with an emphasis on the 1990–2004 period. To do so, we pursue three specific objectives. First, relying on the National Cost of Production Data Set—China’s most complete set of farm input and output data—we chart the input and output trends for 23 of China’s main farm commodities. Second, using a stochastic production frontier approach, we estimate the rate of change in TFP for each commodity. Finally, we decompose the changes in TFP into two components: changes in efficiency and technical change. Our findings—especially after the early 1990s—are remarkably consistent. China’s agricultural TFP has grown at a healthy rate for all 23 commodities. TFP for the staple commodities generally grew around 2% annually; TFP growth for most horticulture and livestock commodities was even higher (between 3% and 5%). Equally consistently, we find that most of the change is accounted for by technical change. The analysis is consistent with the conclusion that new technologies have pushed out the production functions, since technical change accounts for most of the rise in TFP. In the case of many of the commodities, however, the efficiency of producers—that is, the average distance of producers from the production frontier—has fallen. In other words, China’s TFP growth would have been even higher had the efficiency of production not eroded the gains of technical change. Although we do not pinpoint the source of rising inefficiency, the results are consistent with a story in which there is considerable disequilibrium in the farm economy during this period of rapid structural change and farmers are getting little help in making these adjustments from the extension system.
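The decomposition the abstract relies on can be illustrated with a small worked example; the numbers below are invented for illustration, not taken from the National Cost of Production Data Set. Since ln(observed output) = ln(frontier output) + ln(technical efficiency), TFP growth splits exactly into technical change (the frontier shift) plus efficiency change (catch-up or fall-back):

```python
import numpy as np

# Hypothetical frontier and observed output for one commodity, indexed
# by year (illustrative numbers only, with inputs held fixed).
years = np.array([1990, 1997, 2004])
frontier_output = np.array([100.0, 118.0, 140.0])   # best-practice output
observed_output = np.array([90.0, 102.0, 117.0])    # average producer's output

technical_efficiency = observed_output / frontier_output
# Annualized log growth rates over each sub-period.
dt = np.diff(years)
tfp_growth = np.diff(np.log(observed_output)) / dt        # total TFP change
tech_change = np.diff(np.log(frontier_output)) / dt       # frontier shift
eff_change = np.diff(np.log(technical_efficiency)) / dt   # catch-up or fall-back

for i, (a, b) in enumerate(zip(years[:-1], years[1:])):
    print(f"{a}-{b}: TFP {tfp_growth[i]:+.2%}/yr = "
          f"technical change {tech_change[i]:+.2%}/yr "
          f"+ efficiency change {eff_change[i]:+.2%}/yr")
```

As in the paper's findings, the frontier here shifts out faster than observed output grows, so the efficiency-change term is negative.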

3.
The interaction between a creditor and a sovereign debtor is described as a ‘one-shot’ game with discrete actions—total or no debt repudiation, and seizure of asset holdings abroad. Possible Nash equilibria, in which each player chooses an action so as to maximize his expected payoff given his beliefs about the other player’s action and the implications of those actions for the players’ reputation for trustworthiness, are identified. However, if reputation losses rise convexly with the players’ relative hostility, partial repudiation and seizure can be the preferred strategies. The preferred repudiation and seizure rates are analyzed under asymmetric and symmetric information about the state of the world. (JEL classification: F34)
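The one-shot discrete game can be sketched as a 2x2 bimatrix game. The payoff numbers below are purely illustrative assumptions, chosen so that reputation costs make mutual hostility the equilibrium; they are not taken from the paper:

```python
import numpy as np
from itertools import product

# Rows: debtor's action (0 = repay, 1 = repudiate).
# Columns: creditor's action (0 = no seizure, 1 = seize assets).
# Repudiation saves debt service but costs reputation; seizure recovers
# value but also costs reputation. Payoffs are illustrative only.
debtor = np.array([[0.0, -4.0],
                   [3.0, -2.0]])
creditor = np.array([[0.0, -3.0],
                     [-5.0, 1.0]])

def pure_nash(A, B):
    """Return all pure-strategy Nash equilibria of a bimatrix game."""
    eqs = []
    for i, j in product(range(A.shape[0]), range(A.shape[1])):
        # i is a best response to j, and j is a best response to i.
        if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
            eqs.append((i, j))
    return eqs

labels = {0: ("repay", "no seizure"), 1: ("repudiate", "seize")}
for i, j in pure_nash(debtor, creditor):
    print(f"Nash equilibrium: debtor {labels[i][0]}, creditor {labels[j][1]}")
```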

4.
The subject of this paper is the analysis of consensus within small groups of respondents, based on a proportionally large number of variables. The target group is researchers who are interested in Q-mode research. Measures of agreement are compared, and an application from a recent project is presented. Cohen’s κ is the preferable measure; Krippendorff’s α is an alternative, based on a different concept of expected disagreement. At group level, along with κ and α for multiple raters, additional measures are r_wg, the intraclass correlation, and κ_SC. Predictions about level differences between groups can be assessed by a t-test and θ.
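Cohen's κ is simple enough to compute directly; a minimal two-rater sketch with invented Q-sort data (the ratings below are hypothetical, not from the project mentioned in the abstract):

```python
import numpy as np

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters over the same items (nominal categories)."""
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    categories = np.union1d(r1, r2)
    p_observed = np.mean(r1 == r2)
    # Expected agreement under independent marginal distributions.
    p_expected = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)
    return (p_observed - p_expected) / (1.0 - p_expected)

# Two raters sorting 10 statements into 3 piles (illustrative data).
a = [1, 1, 2, 3, 3, 2, 1, 3, 2, 2]
b = [1, 2, 2, 3, 3, 2, 1, 3, 1, 2]
print(f"kappa = {cohens_kappa(a, b):.3f}")
```

Krippendorff's α differs mainly in computing expected disagreement from the pooled distribution of all ratings rather than from per-rater marginals.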

5.
From 1944 to 1986, 19 states held 27 referendums on right-to-work legislation, with 22.5 million people voting on the proposals. Despite its prominence as a public issue, most research on right-to-work laws focuses on their industrial relations impacts, not on employees’ individual rights to refrain from joining unions or those same employees’ responsibilities to support their bargaining unit representative. Nor has there been any research on which citizen groups determine those rights and responsibilities in a right-to-work referendum. This study explores a potential operational model of anti-right-to-work voting through a multiple regression analysis of Missouri’s 1978 right-to-work election results, and aims to serve as a stimulus to additional research on these particular dimensions of the right-to-work issue.

6.
In weak institutional settings, autocrats barter political and economic concessions for the support they need to remain in power and extract rents. Instead of viewing the beneficiaries of these favors, i.e. the elites, as an exogenous entity, we allow the king to decide whom to coopt, given that the subjects are heterogeneous in the potential support—their strength—they could bring to the regime. While the ruler can select the elites on the basis of their personal characteristics, an alternative strategy consists in introducing some uncertainty into the cooptation process. The latter strategy allows the king to reduce the clients’ cooptation price, since in the event of a revolution the likelihood of being included in the future body of elites is lower. We show that weak rulers are more likely to coopt the society’s strongest individuals, while powerful rulers diversify the composition of their clientele. Moreover, when agents place more value on future discounted outcomes, the king is more likely to coopt subjects at random.

7.
To implement a social choice function is to endow the agents involved in a collective decision problem with a privately owned decision power, in such a way that by exercising this power (noncooperatively) the agents eventually select the very outcome recommended by the social choice function. In this paper, we show that the concept of a dominance-solvable voting scheme allows the implementation of some of the most desirable social choice functions. Namely, our main result is the following: if the number of agents n is a prime integer strictly greater than the number of outcomes p, then there exists at least one efficient, anonymous and neutral social choice function (in short, eanscf) that can be implemented by a dominance-solvable voting scheme. The result is proved constructively, i.e., by looking at a repeated version of voting by veto, which is by itself an appealing voting scheme. The arithmetic condition (n should be prime and greater than p) is very natural, since an eanscf does not exist unless every prime factor of n is greater than p.
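A sketch of the basic sincere, one-shot voting-by-veto mechanism that the constructive proof builds on. The paper uses a repeated, strategic (dominance-solvable) version, so this shows only the underlying scheme, with the classic convention of n agents and n + 1 outcomes:

```python
import numpy as np

rng = np.random.default_rng(1)

def voting_by_veto(preferences):
    """Sequential veto: each agent in turn strikes their worst surviving outcome.

    preferences[i] ranks outcomes from best to worst for agent i.
    With n agents and n + 1 outcomes, exactly one outcome survives.
    (Sincere behaviour shown here; the paper analyzes the strategic,
    repeated version of this scheme.)
    """
    n_outcomes = preferences.shape[1]
    surviving = set(range(n_outcomes))
    for ranking in preferences:          # agents move in a fixed order
        worst = next(o for o in reversed(ranking) if o in surviving)
        surviving.discard(worst)
    return surviving

n_agents, n_outcomes = 4, 5              # n + 1 outcomes, one survivor
prefs = np.array([rng.permutation(n_outcomes) for _ in range(n_agents)])
print("rankings (best to worst):\n", prefs)
print("surviving outcome:", voting_by_veto(prefs))
```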

8.
We analyze binary choices in a random utility model, assuming that agents’ preferences are affected by conformism (with respect to the behavior of the society) and coherence (with respect to identity). We show that multiple stationary equilibria may arise and that the outcome can look very different from that of a society in which all agents take their decisions in isolation. We quantify the fraction of agents that behave coherently. We apply the analysis to sequential voting when voters “like to win”. Compared to the existing literature, we enrich the setting by assuming that each voter is endowed with an ideology, and we consider the interplay between coherence and the desire to vote with the (perceived) majority.
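A stylized stand-in for the model (not the paper's specification): in a standard mean-field logit binary-choice model with a conformity term, the stationary mean choice solves a fixed-point equation, and strong conformity produces multiple equilibria:

```python
import numpy as np

def stationary_shares(conformity, ideology_bias=0.0, grid=2000):
    """Fixed points of m = tanh(conformity * m + ideology_bias).

    m is the mean choice in {-1, +1} under a mean-field logit model with
    a conformity term (a stylized stand-in for the paper's model).
    An even grid avoids landing exactly on m = 0.
    """
    m = np.linspace(-1, 1, grid)
    f = np.tanh(conformity * m + ideology_bias) - m
    # Sign changes of f locate the fixed points.
    idx = np.where(np.sign(f[:-1]) * np.sign(f[1:]) < 0)[0]
    return [(m[i] + m[i + 1]) / 2 for i in idx]

print("weak conformity (J=0.5):", stationary_shares(0.5))   # unique equilibrium
print("strong conformity (J=2):", stationary_shares(2.0))   # multiple equilibria
```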

9.
This paper takes a closer look at the conceptual grounds of the notion of causality in Granger’s sense. We start with the often jokingly made remark that ‘Christmas card sales Granger-cause Christmas’. Then, we extend the example to the more challenging case of chocolate Easter bunny sales and Easter. We show that any reference to Granger-causality in these cases is due to a misinterpretation of the concept. Moving on to methodological grounds, we argue that the concept of Granger-causality calls for a multivariate framework of analysis, because taking all available relevant information into account is indeed required by Granger’s definition of causality. This is also in line with rational behaviour and learning under imperfect and incomplete information. The implications of employing a multivariate framework of analysis are discussed in terms of the additional insights it brings, namely the direct, indirect, and spurious cases of Granger-causality. Finally, we examine the semantics of the definition of causality in Granger’s sense.
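The Christmas-card remark is easy to reproduce with a standard bivariate Granger test. The data below are synthetic, and the point is exactly the misinterpretation the paper warns about: the test flags "causality" purely because sales lead the event, i.e. predictability is not causation:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(42)

# Monthly data: a deterministic "Christmas" indicator (December = 1) and
# noisy "card sales" that ramp up one month ahead of it (synthetic data).
months = np.arange(240)
christmas = (months % 12 == 11).astype(float)
card_sales = np.roll(christmas, -1) + 0.1 * rng.standard_normal(240)

# Bivariate test: does the second column help predict the first?
data = np.column_stack([christmas, card_sales])
results = grangercausalitytests(data, maxlag=2, verbose=False)
for lag, res in results.items():
    fstat, pval = res[0]["ssr_ftest"][:2]
    print(f"lag {lag}: F = {fstat:.1f}, p = {pval:.2g}")
```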

10.
While International Entrepreneurship has attracted scholars’ attention during the last two decades, the impact of cognitive aspects has been studied only at a cursory level. The purpose of this paper is to apply the Theory of Planned Behavior (TPB) to the field of International Entrepreneurship in order to examine whether this theory helps clarify what influences Small and Medium-sized Enterprise (SME) decision-makers’ intention—an important cognitive antecedent of behavior—to play an active part in internationalization. In particular, it had to be clarified whether or not the theory—due to the contextual specificities of International Entrepreneurship—deserves to be extended by further elements, i.e. experience and knowledge. Based on more than 100 responses from German SME executives, the study yielded several interesting results. First, TPB indeed helps explain how intentions to actively participate in international business are formed. Second, an extension of the theory’s basic model seems to make sense, probably due to the specificities of international entrepreneurial behavior. As for the extensions, both direct and moderating effects were observed. Furthermore, cognitive elements seem to be key entrepreneurial resources which serve as a sort of enabler. Several conclusions can be drawn from these results. Cognitive aspects are a promising starting point for understanding decision-making in SMEs; thus, the intersection of International Entrepreneurship and entrepreneurial cognition deserves further attention—several examples of possible future studies are presented. Policies supporting SMEs should be extended, since purely resource-based approaches seem to be insufficient. Furthermore, entrepreneurship courses and curricula should reflect the relevance of cognitive aspects.

11.
The main focus of this paper is the development of utility measures for dynamic multi-state systems that have M + 1 discrete states of working efficiency. To develop these measures, we assume that the degradation of the multi-state system follows a non-homogeneous semi-Markov process together with a backward recurrence time process, and that the system can degrade directly into any lower state. The measure considers both the immediate utility derived from the state the system occupies and the utility due to the system’s survival expectations derived from its reliability. Moreover, in order to measure the customer’s cumulative utility, a discounted non-homogeneous continuous-time semi-Markov process with rewards is implemented. In this way, the higher-order moments of the reward process can be evaluated, in particular the expectation and the standard deviation of the consumer’s accumulated discounted utility. These statistical indices are useful both to industry, for comparing different system designs offered by the market, and to the market itself, for knowing the customer’s utility derived from the goods she/he consumes. Finally, a numerical example shows the possibility of implementing the model in real-life problems.
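A Monte Carlo sketch of the discounted-reward idea, deliberately simplified to a time-homogeneous continuous-time Markov chain (the paper's model is a non-homogeneous semi-Markov process with backward recurrence times, which this does not reproduce); states, rates and utility values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# M + 1 = 4 efficiency states, 3 (best) down to 0 (failed); the system can
# jump to any lower state. down_rates[s] lists the rates to states s-1, ..., 0.
utility_rate = np.array([0.0, 0.3, 0.7, 1.0])   # utility per unit time in state s
down_rates = {3: [0.10, 0.05, 0.05], 2: [0.15, 0.10], 1: [0.20]}
rho = 0.05                                       # discount rate
horizon = 30.0

def discounted_utility():
    t, s, total = 0.0, 3, 0.0
    while s > 0 and t < horizon:
        rates = np.array(down_rates[s])
        rate = rates.sum()
        dwell = min(rng.exponential(1 / rate), horizon - t)
        # Discounted utility accrued while occupying state s over [t, t + dwell].
        total += utility_rate[s] * (np.exp(-rho * t) - np.exp(-rho * (t + dwell))) / rho
        t += dwell
        if t < horizon:
            s -= 1 + rng.choice(len(rates), p=rates / rate)  # jump to a lower state
    return total

samples = np.array([discounted_utility() for _ in range(20_000)])
print(f"E[discounted utility] = {samples.mean():.3f}, sd = {samples.std():.3f}")
```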

12.
This paper discusses whether Local Monotonicity (LM) should be regarded as a property of the power distribution of a specific voting game under consideration, indicated by a power measure, or as a characteristic of power per se. The latter would require reasonable power measures to satisfy a corresponding LM axiom. The former suggests that measures which do not allow for a violation of LM fail to account for dimensions of power that can cause nonmonotonicity in voting weight. Only if a measure is able to indicate nonmonotonicity can it help design voting games for which power turns out to be monotonic. The argument is discussed in the light of recent extensions of traditional power indices.
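For a concrete weighted voting game, a power measure and an LM check might look as follows. This is a minimal sketch using the normalized Banzhaf index, which, like most traditional indices, satisfies LM by construction (illustrating why such measures cannot signal nonmonotonicity):

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Normalized Banzhaf index of a weighted voting game."""
    n = len(weights)
    swings = [0] * n
    for r in range(n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            if total >= quota:                       # winning coalition
                for i in coalition:                  # i is critical if removing
                    if total - weights[i] < quota:   # i makes it losing
                        swings[i] += 1
    s = sum(swings)
    return [x / s for x in swings]

def locally_monotonic(weights, index):
    """More weight never means strictly less measured power."""
    return all((wi - wj) * (pi - pj) >= 0
               for (wi, pi), (wj, pj) in combinations(zip(weights, index), 2))

weights, quota = [4, 3, 2, 1], 6
idx = banzhaf(weights, quota)
print("Banzhaf:", [round(p, 3) for p in idx])
print("locally monotonic:", locally_monotonic(weights, idx))
```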

13.
In this paper we consider eight general models of independence: the Hájek–Šidák model, the Janssen–Mason model, Konijn’s model, Steffensen’s model, the Farlie model, the bivariate Gamma distribution, the Mardia model and the Fréchet model. The asymptotic efficacies and relative efficiencies of various linear rank tests are computed. It turns out that the asymptotic power depends heavily on the underlying model. However, for the vast majority of the models considered, the Spearman test is, asymptotically, a good choice.

14.
This paper applies Novak’s (1998) theory of learning to the problem of workplace bullying. Novak’s theory offers an understanding of how acts of bullying and responses to bullying can be seen as deriving from the individualized conceptualizations of workplace bullying held by those involved. Further, Novak’s theory suggests that training involving Ausubel’s concept of meaningful learning (Ausubel, Educational Theory 11(1): 15–25, 1961; Ausubel et al. 1978)—which attends to learners’ pre-existing knowledge and allows new meaning to be constructed regarding workplace bullying—can lead to new actions related to workplace bullying. Ideally, these new actions involve both a reduction in the prevalence of workplace bullying and responses to it that recognize and are informed by the negative consequences of this workplace dynamic.

15.
In Fortiana and Grané (J Stat Plann Infer 108:85–97), we studied a scale-free statistic, based on Hoeffding’s maximum correlation, for testing exponentiality. This statistic admits an expansion along a countable set of orthogonal axes, giving rise to a sequence of statistics. Linear combinations of a given number p of terms in this sequence can be written as a quotient of L-statistics. In this paper, we propose a scale-free adaptive statistic for testing exponentiality with optimal power against a specific alternative and obtain its exact distribution. An empirical power study shows that the test based on this new statistic performs at the same level as the best tests in the statistical literature.
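The paper's Hoeffding-correlation statistic is not reproduced here, but the structure of an empirical power study is generic. The sketch below uses the sample coefficient of variation (which equals 1 under exponentiality) as a stand-in scale-free statistic, with critical values simulated under the null and a Weibull alternative:

```python
import numpy as np

rng = np.random.default_rng(3)

def cv_statistic(x):
    """Scale-free statistic: sample coefficient of variation (= 1 for
    exponential data). A stand-in for the paper's statistic."""
    return x.std(ddof=1) / x.mean()

def mc_power(n=50, alpha=0.05, reps=5000):
    # Null distribution of the statistic by simulation (exponential samples).
    null = np.sort([cv_statistic(rng.exponential(size=n)) for _ in range(reps)])
    lo, hi = np.quantile(null, [alpha / 2, 1 - alpha / 2])  # two-sided test
    # Power against a Weibull(shape = 1.5) alternative.
    alt = [cv_statistic(rng.weibull(1.5, size=n)) for _ in range(reps)]
    return np.mean([(t < lo) | (t > hi) for t in alt])

print(f"estimated power at n=50: {mc_power():.2f}")
```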

16.
Recent evidence suggests that firms’ environments are becoming more complex and uncertain. This paper investigates the relationship between the complexity of a firm’s activities, environmental uncertainty and organizational structure. We assume agents are arranged hierarchically, but decisions can be made at different levels. We model a firm’s activity set as a modified NK landscape. Via simulations, we find that centralized decision making generates a higher payoff in more complex and uncertain environments, and that a flatter structure is better for an organization with centralized decision making, provided the cost of information processing is low enough. Financial support from the Zengin Foundation for Studies on Economics and Finance is gratefully acknowledged.
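A minimal NK landscape with local search, in the spirit of (but not identical to) the modified landscape the abstract mentions; N, K, the cyclic neighborhood structure, and the hill-climbing search are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

class NKLandscape:
    """Minimal NK landscape: N binary activities, each activity's fitness
    contribution depends on itself and its K neighbors (cyclic)."""
    def __init__(self, N, K):
        self.N, self.K = N, K
        # Random contribution table: one value per activity per local pattern.
        self.table = rng.random((N, 2 ** (K + 1)))

    def fitness(self, config):
        total = 0.0
        for i in range(self.N):
            # Bits of activity i and its K successors form the table index.
            bits = [config[(i + j) % self.N] for j in range(self.K + 1)]
            total += self.table[i, int("".join(map(str, bits)), 2)]
        return total / self.N

def hill_climb(land, steps=1000):
    """One-bit-flip local search: K tunes landscape ruggedness."""
    config = list(rng.integers(0, 2, land.N))
    best = land.fitness(config)
    for _ in range(steps):
        i = rng.integers(land.N)
        config[i] ^= 1                  # flip one activity
        f = land.fitness(config)
        if f >= best:
            best = f
        else:
            config[i] ^= 1              # reject the downhill move
    return best

for K in (0, 4, 8):
    land = NKLandscape(N=12, K=K)
    print(f"K={K}: local optimum fitness ~ {hill_climb(land):.3f}")
```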

17.
We consider nonparametric estimation of multivariate versions of Blomqvist’s beta, also known as the medial correlation coefficient. For a two-dimensional population, the sample version of Blomqvist’s beta describes the proportion of data points falling into the first or third quadrant of a two-way contingency table whose cutting points are the sample medians. Asymptotic normality and strong consistency of the estimators are established by means of the empirical copula process, imposing weak conditions on the copula. Though the asymptotic variance takes a complicated form, we are able to derive explicit formulas for large families of copulas. For the copulas of elliptically contoured distributions we obtain a variance-stabilizing transformation similar to Fisher’s z-transformation, which allows an explicit construction of asymptotic confidence bands for hypothesis testing and eases the analysis of asymptotic efficiency. The computational complexity of estimating Blomqvist’s beta grows linearly with the sample size n, which is lower than the complexity of most competing dependence measures.
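The bivariate sample version is a one-pass computation around the medians, which is where the low complexity comes from. A minimal sketch, checked against the known population value (2/π)·arcsin(ρ) for a bivariate normal (the data below are simulated, not from the paper):

```python
import numpy as np

def blomqvist_beta(x, y):
    """Sample Blomqvist's beta (medial correlation): 2p - 1, where p is the
    proportion of points in the first or third quadrant around the medians."""
    x, y = np.asarray(x), np.asarray(y)
    concordant = (x - np.median(x)) * (y - np.median(y)) > 0
    return 2.0 * concordant.mean() - 1.0

rng = np.random.default_rng(2)
# Correlated bivariate normal sample; for elliptical distributions the
# population value is (2/pi) * arcsin(rho).
rho = 0.6
x, y = rng.multivariate_normal([0, 0], [[1.0, rho], [rho, 1.0]], size=5000).T
print(f"sample beta = {blomqvist_beta(x, y):.3f}, "
      f"theory = {2 / np.pi * np.arcsin(rho):.3f}")
```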

18.
In this study, we look for empirical support for the hypothesis that there is a positive relationship between the level of corporate governance quality across firms and the relative efficiency levels of those firms. This hypothesis is related to Leibenstein’s idea of X-efficiency. We use the data envelopment analysis (DEA) estimator to obtain proxies for the X-[in]efficiency of the firms in our sample and then analyze them with respect to different ownership structures by comparing distributions and aggregate efficiencies across different groups. We also use truncated regression with bootstrap, following Simar and Wilson (Estimation and inference in two-stage, semi-parametric models of production processes) and Simar and Zelenyuk (2003), to infer the relationship of inefficiency to various indicators of corporate governance quality and ownership variables, as well as industry- and year-specific dummies. The data come from seven industries in Ukraine. “The entrepreneurship structure itself may be critical, with the classic issue of the separation of ownership from control being regarded as one of the earliest and most important sources of X-efficiency” (Button and Weyman-Jones, 1992, American Economic Review). We would like to dedicate this paper to the memory of Christos Panzios—the co-editor who handled our paper to almost the very end, and whose suggestions and encouragement helped us substantially improve it.
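An input-oriented, constant-returns (CCR) DEA program of the kind used to proxy X-inefficiency can be written as a small linear program; the firm data below are illustrative, not the Ukrainian sample:

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Input-oriented CCR DEA efficiency of unit o.

    X: (n_units, n_inputs), Y: (n_units, n_outputs). Solves
    min theta s.t. X'lam <= theta * x_o, Y'lam >= y_o, lam >= 0.
    """
    n, m = X.shape
    _, s = Y.shape
    # Decision variables: [theta, lam_1, ..., lam_n]; minimize theta.
    c = np.zeros(n + 1)
    c[0] = 1.0
    # Input constraints: sum_j lam_j * x_ij - theta * x_io <= 0.
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    # Output constraints: -sum_j lam_j * y_rj <= -y_ro.
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[o]]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# Illustrative firm-level data: one input, one output.
X = np.array([[2.0], [4.0], [3.0], [5.0], [6.0]])
Y = np.array([[1.0], [2.0], [3.0], [2.0], [4.0]])
for o in range(len(X)):
    print(f"firm {o}: efficiency = {dea_efficiency(X, Y, o):.3f}")
```

In the two-stage approach the abstract describes, these efficiency scores would then be regressed (with a bootstrap-corrected truncated regression) on governance and ownership variables.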

19.
A decision problem—allocating public research and development (R&D) funding—is faced by a planner who has ambiguous knowledge of the welfare effects of the various research areas. We model this as a reverse portfolio choice problem faced by a Bayesian decision-maker. Two elements of the planner’s inferential system are developed: a conditional distribution of welfare ‘returns’ on an allocation, given citizens’ stated preferences for the different areas, and a minimum-risk criterion for re-allocating these funds, given the performance of a status quo level of funding. A case study of Canadian public research funds expended on various applications of agricultural biotechnology is provided. The decision-making methodology can accommodate a variety of collective expenditure and resource allocation problems.

20.
When the merged union UNITE HERE was recently torn apart by internal dissent, the labor movement’s attention turned to some longstanding questions about how union mergers are negotiated, why some fail and others succeed, how members are affected by mergers, and how the big, diverse unions created by mergers—the super-unions—manage to stay intact. This article addresses these questions, arguing throughout that little is actually known about the union merger process and its outcomes. In doing so, it also suggests that some union mergers, such as the one forming UNITE HERE, may not always make sense, and that bigger unions created by mergers are not necessarily better unions.
