Similar Articles
20 similar articles found (search time: 31 ms)
1.
We consider a probabilistic approach to the problem of assigning k indivisible identical objects to a set of agents with single-peaked preferences. Using the ordinal extension of preferences, we characterize the class of uniform probabilistic rules by Pareto efficiency, strategy-proofness, and no-envy. We also show that in this characterization no-envy cannot be replaced by anonymity. When agents are strictly risk-averse von Neumann-Morgenstern utility maximizers, we reduce the problem of assigning k identical objects to a problem of allocating the amount k of an infinitely divisible commodity.
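As a concrete illustration of the divisible-commodity reduction in the last sentence, here is a minimal Python sketch (ours, not the authors') of the classical uniform rule for dividing an amount k among agents with single-peaked preferences; the peaks, tolerance, and function name are illustrative assumptions.

    def uniform_rule(peaks, k, tol=1e-9):
        """Uniform rule for dividing an amount k among agents with peaks p_i."""
        total = sum(peaks)
        if abs(total - k) <= tol:
            return list(peaks)
        # excess demand: shares are min(p_i, lam); excess supply: max(p_i, lam)
        clip = min if total > k else max
        lo, hi = 0.0, max(peaks) + k          # bracket for the common bound lam
        while hi - lo > tol:
            lam = (lo + hi) / 2
            if sum(clip(p, lam) for p in peaks) < k:
                lo = lam                      # total allocated rises with lam
            else:
                hi = lam
        lam = (lo + hi) / 2
        return [clip(p, lam) for p in peaks]

    print(uniform_rule([3.0, 1.0, 2.0], 4.0))  # approximately [1.5, 1.0, 1.5]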

2.
Although the idea that buyer–supplier partnerships can yield considerable benefits to firms is widely diffused among researchers and practitioners, the approach adopted in this paper is that there is no “one best way” in buyer–supplier relationships, but rather a “best way” for each specific exchange context. Hence, this paper proposes a contingency model for shaping and managing buyer–supplier relationships in manufacturing contexts. To test the model, an empirical study was performed on a sample of 45 buyer–supplier relationships within the Italian white goods industry. A three-dimensional performance indicator was computed to compare supplier performance achieved within relationships matching the model's suggestions against performance in those set differently. The results strongly suggest that suppliers involved in relationships configured in accordance with the contingency model are likely to enjoy superior performance.

3.
Using the measure of risk aversion suggested by Kihlstrom and Mirman [Kihlstrom, R., Mirman, L., 1974. Risk aversion with many commodities. Journal of Economic Theory 8, 361–388; Kihlstrom, R., Mirman, L., 1981. Constant, increasing and decreasing risk aversion with many commodities. Review of Economic Studies 48, 271–280], we propose a dynamic consumption–savings–portfolio choice model in which the consumer-investor maximizes the expected value of a non-additively separable utility function of current and future consumption. Preferences for consumption streams are CES and the elasticity of substitution can be chosen independently of the risk aversion measure. The additively separable case is a special case. Because choices are not dynamically consistent, we follow the “consistent planning” approach of Strotz [Strotz, R., 1956. Myopia and inconsistency in dynamic utility maximization. Review of Economic Studies 23, 165–180] and also interpret our analysis from the game-theoretic perspective taken by Peleg and Yaari [Peleg, B., Yaari, M., 1973. On the existence of a consistent course of action when tastes are changing. Review of Economic Studies 40, 391–401]. The equilibrium of the Lucas asset pricing model with i.i.d. consumption growth is obtained and the equity premium is shown to depend on the elasticity of substitution as well as the risk aversion measure. The nature of the dependence is examined. Our results are contrasted with those of the non-expected utility recursive approach of Epstein–Zin and Weil.
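For orientation, one standard way to write Kihlstrom–Mirman preferences of this kind (a hedged reconstruction on our part, not necessarily the paper's exact specification) composes a concave transform indexing risk aversion with an ordinal CES aggregator indexing intertemporal substitution:

    V(c) = \frac{U(c)^{1-\alpha}}{1-\alpha},
    \qquad
    U(c) = \Bigl( \sum_{t \ge 0} \beta^{t}\, c_t^{\,1-1/\sigma} \Bigr)^{\frac{1}{1-1/\sigma}},

where \sigma is the elasticity of substitution and \alpha governs the risk aversion measure; the additively separable expected-utility case obtains when \alpha = 1/\sigma.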

4.
This article examines the link between local government fragmentation, or “Tiebout choice,” and segregation between black and white residents. As suggested by Tiebout [Tiebout, C., 1956. A pure theory of local public expenditures. Journal of Political Economy 64, 416–424.], fragmented local governance structures may encourage households to vote with their feet and sort into communities based on their willingness to pay for local public services. This outcome has been well documented. The nuance explored here is that, if the demand for local public services varies by race or if households have preferences for neighbors with specific racial characteristics, local government fragmentation may foster an increase in residential segregation by race across neighborhoods and jurisdictions. Results from metropolitan-level regressions suggest that increased Tiebout choice is associated with increases in black–white residential segregation within US metropolitan areas. Comparable results are obtained from household-level estimates, where the black racial composition of a household's census tract of residence is regressed on household-level controls and racially stratified measures of Tiebout choice. Results from both approaches suggest that a 10% increase in Tiebout choice would increase neighborhood segregation by no more than 1%, while segregation across jurisdictions would increase by between 4% and 7%.

5.
This paper explores interactive epistemology within Morris’ [S. Morris, Alternative definitions of knowledge, in: M.O.L. Bacharach, L.-A. Gerard-Varet, P. Mongin, H.S. Shin (Eds.), Epistemic Logic and the Theory of Games and Decisions, Kluwer Academic Publishers, Amsterdam, 1997, pp. 217–233] framework of knowledge. Specifically, this paper proves a generalized “agreeing to disagree” result. The major features of this formalization are: (i) non-expected utility receives a unified treatment; (ii) the information structure is not necessarily partitional; (iii) Aumann’s celebrated result of “agreeing to disagree” and Milgrom and Stokey’s well-known result of “no trade” are derived as special cases. This paper also presents some new extensions of the “no trade theorem”.
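As background for the partitional special case recovered in (iii), the following toy Python sketch (ours; the states, prior, partitions, and event are illustrative assumptions) verifies Aumann's conclusion: on any common knowledge component where each agent's posterior for an event is constant, the two posteriors must coincide.

    from fractions import Fraction

    # Toy check of Aumann's partitional "agreeing to disagree" result.
    prior = {w: Fraction(1, 4) for w in range(4)}
    P1 = [{0, 1}, {2, 3}]            # agent 1's information partition
    P2 = [{0, 1}, {2}, {3}]          # agent 2's information partition
    E = {0, 2}                       # event of interest

    def posterior(cell, event):
        return sum(prior[w] for w in cell & event) / sum(prior[w] for w in cell)

    def meet(p1, p2):
        # finest common coarsening: merge overlapping cells into components
        cells = [set(c) for c in p1 + p2]
        merged = True
        while merged:
            merged = False
            for i in range(len(cells)):
                for j in range(i + 1, len(cells)):
                    if cells[i] & cells[j]:
                        cells[i] |= cells.pop(j)
                        merged = True
                        break
                if merged:
                    break
        return cells

    for component in meet(P1, P2):
        q1 = {posterior(c, E) for c in P1 if c <= component}
        q2 = {posterior(c, E) for c in P2 if c <= component}
        if len(q1) == 1 and len(q2) == 1:    # posteriors commonly known here
            assert q1 == q2                  # disagreement would be impossible
            print(sorted(component), "common posterior:", q1.pop())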

6.
We consider several notions of setwise stability for many-to-many matching markets with contracts and provide an analysis of the relations between the resulting sets of stable allocations for general, substitutable, and strongly substitutable preferences. Apart from obtaining “set inclusion results” on all three domains, we introduce weak setwise stability as a new stability concept and prove that for substitutable preferences the set of pairwise stable matchings is nonempty and coincides with the set of weakly setwise stable matchings. For strongly substitutable preferences the set of pairwise stable matchings coincides with the set of setwise stable matchings.

7.
In this paper, some new indices for ordinal data are introduced. These indices are designed to measure the degree of concentration on the “small” or the “large” values of a variable whose level of measurement is ordinal. Their advantage over other approaches is that they ascribe unequal weights to each class of values. Although they constitute a useful tool in various fields of application, the focus here is on their use in sample surveys, specifically in situations where one is interested in taking into account the “distance” of the responses from the “neutral” category in a given question. The properties of these indices are examined and methods for constructing confidence intervals for their actual values are discussed. The performance of these methods is evaluated through an extensive simulation study.
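The paper's exact indices are not reproduced here, but the following Python sketch shows the general shape of such a statistic: a concentration measure over ordered response categories in which each class receives its own weight. The function name, counts, and weights are illustrative assumptions on our part.

    def weighted_ordinal_concentration(counts, weights):
        """Hedged sketch of a class-weighted concentration index for ordinal data."""
        n = sum(counts)
        cum, F = 0.0, []
        for c in counts:
            cum += c / n
            F.append(cum)                     # cumulative relative frequencies
        # weighted mean of the cumulative frequencies below the top category:
        # near 1 = mass concentrated on "small" values, near 0 = on "large" ones
        num = sum(w * f for w, f in zip(weights[:-1], F[:-1]))
        return num / sum(weights[:-1])

    # five-point Likert scale, heavier weight the farther from the neutral category
    counts = [40, 25, 10, 15, 10]
    weights = [2.0, 1.0, 0.5, 1.0, 2.0]
    print(round(weighted_ordinal_concentration(counts, weights), 3))  # 0.606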

8.
A policy maker is asked a few simple questions about his preference. Then the model represents it by a quadratic utility function, which can be made monotonic and quasi-concave (= to provide the convexity of the preference). The design of the interview with a policy maker is aimed at attaining the following goals: (a) no ambiguous output (= degeneration of the model), (b) ordinal approach to preferences (= asking questions about ordinal preferences and providing the uniqueness of the ordinal preference at the model output, regardless of its representation by a quadratic utility function), (c) stability of the model (= the model's input–output transformation is continuous). We also describe briefly the implementation of our model in a user-friendly interface to a corresponding computer program.

9.
We study Pareto efficiency in a setting that involves two kinds of uncertainty: Uncertainty over the possible outcomes is modeled using lotteries whereas uncertainty over the agents’ preferences over lotteries is modeled using sets of plausible utility functions. A lottery is universally Pareto undominated if there is no other lottery that Pareto dominates it for all plausible utility functions. We show that, under fairly general conditions, a lottery is universally Pareto undominated iff it is Pareto efficient for some vector of plausible utility functions, which in turn is equivalent to affine welfare maximization for this vector. In contrast to previous work on linear utility functions, we use the significantly more general framework of skew-symmetric bilinear (SSB) utility functions as introduced by Fishburn (1982). Our main theorem generalizes a theorem by Carroll (2010) and implies the ordinal efficiency welfare theorem. We discuss three natural classes of plausible utility functions, which lead to three notions of ordinal efficiency, including stochastic dominance efficiency, and conclude with a detailed investigation of the geometric and computational properties of these notions.
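For reference, Fishburn's SSB representation compares lotteries p and q through a skew-symmetric bilinear functional, with p weakly preferred to q iff \varphi(p,q) \ge 0:

    \varphi(p,q) = \sum_{x}\sum_{y} p(x)\, q(y)\, \varphi(x,y),
    \qquad
    \varphi(x,y) = -\varphi(y,x).

Linear (von Neumann–Morgenstern) utility is the special case \varphi(x,y) = u(x) - u(y).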

10.
This paper provides a formal justification for the existence of subjective random components intrinsic to the outcome evaluation process of decision makers and explicitly assumed in the stochastic choice literature. We introduce the concepts of admissible error function and generalized certainty equivalent, which allow us to analyze two different criteria, a cardinal and an ordinal one, when defining suitable approximations to expected utility values. Contrary to the standard literature, which requires irrational preferences, adjustment errors arise in a natural way within our setting, their existence following directly from the disconnectedness of the range of the utility functions. Conditions for the existence of minimal errors are also studied. Our results imply that neither the cardinal nor the ordinal criterion necessarily provides the same evaluation for two or more different prospects with the same expected utility value. As a consequence, a rational decision maker may define two different generalized certainty equivalents when presented with the same prospect on two different occasions.

11.
This paper replaces Gibbard’s (Econometrica 45:665–681, 1977) assumption of strict ordinal preferences by the more natural assumption of cardinal preferences on the set of pure social alternatives, and we also admit indifferences among the alternatives. Following a line of reasoning similar to the Gibbard-Satterthwaite theorem in the deterministic framework, we first show that if a decision scheme satisfies strategy-proofness and unanimity, then there is an underlying probabilistic neutrality result which generates an additive coalitional power function. This result is then used to prove that a decision scheme which satisfies strategy-proofness and unanimity can be represented as a weak random dictatorship. A weak random dictatorship assigns each individual a chance to be a weak dictator. An individual has weak dictatorial power if the support of the social choice lottery is always a subset of his/her maximal utility set. In contrast to Gibbard’s complete characterization of random dictatorship, we also demonstrate with an example that strategy-proofness and unanimity are sufficient but not necessary conditions for a weak random dictatorship.
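In symbols (our paraphrase of the definition above, not the paper's own notation), a decision scheme d is a weak random dictatorship if there are weights \lambda_i \ge 0 with \sum_i \lambda_i = 1 and component schemes d_i such that

    d(u) = \sum_i \lambda_i\, d_i(u),
    \qquad
    \operatorname{supp}\bigl(d_i(u)\bigr) \subseteq \arg\max_{a \in A} u_i(a),

so each individual i is drawn with probability \lambda_i and, when drawn, the social lottery is supported on i's maximal alternatives.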

12.
The problem of option hedging in the presence of proportional transaction costs can be formulated as a singular stochastic control problem. Hodges and Neuberger [1989. Optimal replication of contingent claims under transactions costs. Review of Futures Markets 8, 222–239] introduced an approach that is based on maximization of the expected utility of terminal wealth. We develop a new algorithm to solve the corresponding singular stochastic control problem and introduce a new approach to option hedging which is closer in spirit to the pathwise replication of Black and Scholes [1973. The pricing of options and corporate liabilities. Journal of Political Economy 81, 637–654]. This new approach is based on minimization of a Black–Scholes-type measure of pathwise risk, defined in terms of a market delta, subject to an upper bound on the hedging cost. We provide an efficient backward induction algorithm for the problem of cost-constrained risk minimization, whose associated singular stochastic control problem is shown to be equivalent to an optimal stopping problem. This algorithm is then modified to solve the singular stochastic control problem associated with utility maximization, which cannot be reduced to an optimal stopping problem. We propose to choose an optimal parameter (risk-aversion coefficient or Lagrange multiplier) in either approach by minimizing the mean squared hedging error and demonstrate that with this “best” choice of the parameter, both approaches have similar performance. We also discuss the different notions of risk in both approaches and propose a volatility adjustment for the risk-minimization approach, which is analogous to that introduced by Zakamouline [2006. European option pricing and hedging with both fixed and proportional transaction costs. Journal of Economic Dynamics and Control 30, 1–25] for the utility maximization approach, thereby providing a unified treatment of both approaches.
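The paper's own algorithm is not reproduced here; as background for the reduction it describes, the Python sketch below shows generic backward induction for an optimal stopping problem on a binomial tree. All parameter values and the put-style stopping payoff are illustrative assumptions.

    import math

    # Generic optimal-stopping backward induction on a binomial tree (ours).
    # Example stopping payoff: exercising an American put with strike K.
    S0, K, r, sigma, T, N = 100.0, 100.0, 0.05, 0.2, 1.0, 200
    dt = T / N
    up = math.exp(sigma * math.sqrt(dt))
    dn = 1.0 / up
    p = (math.exp(r * dt) - dn) / (up - dn)   # risk-neutral up probability
    disc = math.exp(-r * dt)

    def stop_value(s):
        return max(K - s, 0.0)

    # value at maturity, then roll back: value = max(stop now, continue)
    values = [stop_value(S0 * up**j * dn**(N - j)) for j in range(N + 1)]
    for n in range(N - 1, -1, -1):
        for j in range(n + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            values[j] = max(stop_value(S0 * up**j * dn**(n - j)), cont)

    print(round(values[0], 4))                # optimal-stopping value at time 0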

13.
George W. Torrance. Socio-Economic Planning Sciences, 1976, 10(3), 129–136
Health state preferences measured on the general public provide useful information in their own right as well as being necessary data for the application of many health status index models. But how should these preferences be measured? This paper reports the results of an empirical investigation in which three measurement techniques are applied to several samples of the general public to measure social preferences for ten different health states. The standard gamble technique of von Neumann and Morgenstern, a time trade-off technique developed by the author, and a category scaling method are analyzed with respect to their feasibility, reliability, validity and comparability.
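For readers new to the first two techniques, their textbook forms for a chronic health state h lasting t years (a summary on our part, not necessarily this paper's exact protocol) are:

    \text{standard gamble:}\quad u(h) = p^{*}, \text{ where the respondent is indifferent between } h \text{ and the gamble } (p^{*}:\ \text{healthy},\ 1-p^{*}:\ \text{dead});

    \text{time trade-off:}\quad u(h) = x^{*}/t, \text{ where } x^{*} \text{ years of full health are judged equivalent to } t \text{ years in } h.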

14.
We unify and generalize the existence results in Werner [Werner, J., 1987. Arbitrage and the existence of competitive equilibrium. Econometrica 55 (6), 1403–1418], Dana et al. [Dana, R.-A., Le Van, C., Magnien, F., 1999. On the different notions of arbitrage and existence of equilibrium. Journal of Economic Theory 87 (1), 169–193], Allouch et al. [Allouch, N., Le Van, C., Page Jr., F.H., 2006. Arbitrage and equilibrium in unbounded exchange economies with satiation. Journal of Mathematical Economics 42 (6), 661–674], and Allouch and Le Van [Allouch, N., Le Van, C., 2008. Erratum to “Walras and dividends equilibrium with possibly satiated consumers”. Journal of Mathematical Economics 45 (3–4), 320–328]. We also show that, in terms of weakening the set of assumptions, we cannot go too far.

15.
We examine time-consistent intertemporal price–quality discrimination by a durable goods monopolist when there is a continuum of buyer demand intensities with respect to product quality and it is profitable for the monopolist to trade with the marginal buyer type (i.e., the “gap” case). We show that along every subgame perfect equilibrium path, with probability 1, prices and qualities decline over time, and the market is completely and monotonically depleted according to buyer type in a finite number of offers. But, unlike in the fixed-quality literature, the monopolist may randomize over price–quality offers along the equilibrium path. We also show that the Coase conjecture continues to be valid here, but in a form that is significantly different from the usual formulation. In the limit, as the time between offers evaporates, the monopolist makes a continuum of offers and perfectly screens the market. However, he effectively cannot price-discriminate, because the equilibrium profits converge to the complete “pooling” profits that would be made if the entire market had the marginal buyer type’s valuation.

16.
The capital management problem posed by R.H. Strotz is analyzed for the case of the ‘naive’ planner who fails to anticipate changes in his own preferences. By imposing progressively stronger restrictions on the primitives of the problem – namely, the discounting function, the utility index function, and the investment technology – the planner's behavior is characterized first as the solution to an ordinary differential equation and then via explicit formulae. Inasmuch as these characterizations leave the discounting function essentially unrestricted, the theory can accommodate, in particular, decision makers who discount time according to the hyperbolic and ‘quasi-hyperbolic’ curves used in applied work and said to be supported by psychological studies. Comparative statics of the model are discussed, as are extensions of the analysis to allow for credit constraints, limited foresight, and partial commitment.
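The discount curves mentioned in the last two sentences take, in applied work, roughly the following standard parameterizations (stated for orientation, not quoted from the paper):

    \text{hyperbolic:}\quad D(t) = \frac{1}{1+\kappa t},
    \qquad
    \text{quasi-hyperbolic:}\quad D(0) = 1,\ \ D(t) = \beta\,\delta^{t}\ \ (t \ge 1),

with 0 < \beta < 1 capturing the present bias and \delta the long-run discount factor.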

17.
Increasing human and social capital by applying job embeddedness theory
Most modern lives are complicated. When employees feel that their organization values the complexity of their entire lives and tries to do something about making it a little easier for them to balance all the conflicting demands, the employees tend to be more productive and to stay with those organizations longer. Job embeddedness captures some of this complexity by measuring both the on-the-job and off-the-job components that most contribute to a person's staying. Research evidence as well as ample anecdotal evidence (discussed here and elsewhere) supports the value of using the job embeddedness framework for developing a world-class retention strategy based on corporate strengths and employee preferences.

To execute their corporate strategy effectively, different organizations require different knowledge, skills and abilities from their people. And because of occupational, geographic, demographic or other differences, these people will have needs that differ from those of employees in other organizations. For that reason, the retention program of the week from international consultants won’t always work. Instead, organizations need to carefully assess the needs and desires of their unique employee base. Then, these organizations need to determine which of these needs they can address in a cost-effective fashion (conferring more benefits than the cost of the program). Many times this requires an investment that will pay off over a longer term – not just a quarter or even a year. Put differently, executives will need to carefully understand the fully loaded costs of turnover (loss of tacit knowledge, reduced customer service, slowed production, lost contracts, lack of internal candidates to lead the organization in the future, etc., in addition to the obvious costs like recruiting, selecting and training new people). Then, these executives need to recognize the expected benefits of various retention practices. Only then can leaders make informed decisions about strategic investments in human and social capital.

Selected bibliography

A number of articles have influenced our thinking about the importance of connecting employee retention strategies to business strategies:
• R. W. Beatty, M. A. Huselid, and C. E. Schneier. “New HR Metrics: Scoring on the Business Scorecard,” Organizational Dynamics, 2003, 32 (2), 107–121.
• J. Bradach. “Organizational Alignment: The 7-S Model,” Harvard Business Review, 1998.
• J. Pfeffer. “Producing Sustainable Competitive Advantage Through the Effective Management of People,” Academy of Management Executive, 1995 (9), 1–13.
• C. J. Collins, and K. D. Clark. “Strategic Human Resources Practices and Top Management Team Social Networks: An Examination of the Role of HR Practices in Creating Organizational Competitive Advantage,” Academy of Management Journal, 2003, 46, 740–752.
The theoretical development and empirical support for the Unfolding Model of turnover are captured in the following articles:
• T. Lee, and T. Mitchell. “An Alternative Approach: The Unfolding Model of Voluntary Employee Turnover,” Academy of Management Review, 1994, 19, 57–89.
• B. Holtom, T. Mitchell, T. Lee, and E. Inderrieden. “Shocks as Causes of Turnover: What They Are and How Organizations Can Manage Them,” Human Resource Management, 2005, 44(3), 337–352.
The development of job embeddedness theory is captured in the following articles:
• T. Mitchell, B. Holtom, T. Lee, C. Sablynski, and M. Erez. “Why People Stay: Using Job Embeddedness to Predict Voluntary Turnover,” Academy of Management Journal, 2001, 44, 1102–1121.
• T. Mitchell, B. Holtom, and T. Lee. “How to Keep Your Best Employees: The Development of an Effective Retention Policy,” Academy of Management Executive, 2001, 15(4), 96–108.
• B. Holtom, and E. Inderrieden. “Integrating the Unfolding Model and Job Embeddedness To Better Understand Voluntary Turnover,” Journal of Managerial Issues, in press.
• D.G. Allen. “Do Organizational Socialization Tactics Influence Newcomer Embeddedness and Turnover?” Journal of Management, 2006, 32, 237–257.
Executive Summary
Employee turnover is costly to organizations. Some of the costs are obvious (e.g., recruiting, selecting, and training expenses) and others are not so obvious (e.g., diminished customer service ability, lack of continuity on key projects, and loss of future leadership talent). Understanding the value inherent in attracting and keeping excellent employees is the first step toward investing systematically to build the human and social capital in an organization. The second step is to identify retention practices that align with the organization's strategy and culture. Through extensive research, we have developed a framework for creating this alignment. We call this theory job embeddedness. Across multiple industries, we have found that job embeddedness is a stronger predictor of important organizational outcomes, such as employee attendance, retention and performance, than the best known and accepted psychological explanations (e.g., job satisfaction and organizational commitment). The third step is to implement the ideas. Throughout this article we discuss examples from the Fortune 100 Best Companies to Work For and many others to demonstrate how job embeddedness theory can be used to build human and social capital by increasing employee retention.

18.
The paper examines the problem of the existence of equilibrium for the stochastic analogue of the von Neumann–Gale model of economic growth. The mathematical framework of the model is a theory of set-valued random dynamical systems defined by positive stochastic operators with certain properties of convexity and homogeneity. Existence theorems for equilibria in such systems may be regarded as generalizations of the Perron–Frobenius theorem on eigenvalues and eigenvectors of positive matrices. The known results of this kind are obtained under rather restrictive assumptions. We show that these assumptions can be substantially relaxed if one allows for randomization. The main result of the paper is an existence theorem for randomized equilibria. Some special cases (models defined by positive matrices) are considered in which the existence of pure equilibria can be established.
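As a reminder of the deterministic benchmark the paper generalizes, the Python sketch below computes the Perron–Frobenius eigenvalue and eigenvector of a positive matrix by power iteration; the matrix entries and tolerances are illustrative assumptions, and the stochastic, set-valued case treated in the paper is far more general.

    def perron_frobenius(A, tol=1e-12, max_iter=10000):
        """Power iteration for the dominant eigenpair of a positive matrix."""
        n = len(A)
        x = [1.0 / n] * n                     # start from the uniform vector
        lam = 0.0
        for _ in range(max_iter):
            y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
            lam = sum(y)                      # eigenvalue estimate (x sums to 1)
            y = [v / lam for v in y]
            if max(abs(a - b) for a, b in zip(y, x)) < tol:
                return lam, y
            x = y
        return lam, x

    lam, v = perron_frobenius([[0.5, 0.3], [0.2, 0.7]])
    print(round(lam, 6), [round(c, 6) for c in v])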

19.
We consider a model in which two agents are engaged in two separate bargaining problems. We introduce a notion of bargaining weights (bargaining power), which is basically given by asymmetric versions of the Perles–Maschler bargaining solution. We thereby view bargaining power as an ordinary good that can be traded in an exchange economy. With equal initial endowments of bargaining power, there exists a Walrasian equilibrium in this exchange economy such that the utility allocation in equilibrium coincides with the Perles–Maschler bargaining solution of the aggregate bargaining problem. Equilibrium prices are given by the primitives of the two bargaining problems.

20.
We present a necessary and sufficient condition for the norm properness of separable utility functions. The condition is illustrated with a variety of examples. The condition and the examples indicate that norm uniformly proper separable utility functions are much “closer” to linear utility functions than previously suspected. We also take this opportunity to present, in a systematic and simplified manner, the basic properties of separable utility functions that are scattered in a fragmented way throughout the literature.
