Similar Literature (20 results)
1.
This paper studies models where the optimal response functions under consideration are not increasing in endogenous variables, and weakly increasing in exogenous parameters. Such models include games with strategic substitutes, as well as cases where, additionally, some variables may be strategic complements. The main result here is that the equilibrium set in such models is a non-empty, complete lattice if, and only if, there is a unique equilibrium. Indeed, for a given parameter value, a pair of distinct equilibria is never comparable. Therefore, with multiple equilibria, some of the established techniques for exhibiting increasing equilibria or for computing equilibria (those that use the largest or smallest equilibrium, or the lattice structure of the equilibrium set) do not apply to such models. Moreover, there are no ranked equilibria in such models. Additionally, the analysis here implies a new proof and a slight generalization of some existing results. It is shown that when a parameter increases, no new equilibrium is smaller than any old equilibrium. (In particular, in n-player games of strategic substitutes with real-valued action spaces, symmetric equilibria increase with the parameter.)

2.
This paper reviews six approaches to binary response (y1) structural forms with an endogenous regressor y2: (i) the two-stage least squares estimator-like substitution approach, (ii) the control function approach, (iii) the system reduced-form approach, (iv) the artificial instrumental regressor approach, (v) the transformed-response instrumental variable estimator approach and (vi) the classical maximum likelihood estimator approach. The applicability of the six methods differs greatly, depending on whether y2 is a continuously distributed random variable or a discrete transformation of a latent variable. We conduct a real-data-based simulation study and provide an empirical illustration. Our overall recommendation is to use (i) and (ii), as the others have undesirable features such as analytic complexity in (iii), computational difficulty in (iv) and (vi), and poor finite-sample performance in (v).
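As a sketch of approach (ii) only, under invented parameter values (not the article's data or implementation), the following simulates a binary y1 with a continuous endogenous y2 and applies the control function idea: first-stage least squares of y2 on an instrument z, with the first-stage residual then included as an extra regressor in a second-stage probit for y1.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                      # instrument
v = rng.normal(size=n)                      # first-stage error
y2 = 1.0 * z + v                            # continuous endogenous regressor
e = 0.8 * v + 0.6 * rng.normal(size=n)      # structural error, correlated with v
y1 = (0.5 * y2 + e > 0).astype(float)       # binary response

# Stage 1: least squares of y2 on z; the residual is the control function.
Zmat = np.column_stack([np.ones(n), z])
vhat = y2 - Zmat @ np.linalg.lstsq(Zmat, y2, rcond=None)[0]

# Stage 2: probit of y1 on (1, y2, vhat) by maximum likelihood.
X = np.column_stack([np.ones(n), y2, vhat])

def negll(b):
    p = norm.cdf(X @ b).clip(1e-10, 1 - 1e-10)
    return -(y1 * np.log(p) + (1 - y1) * np.log(1 - p)).sum()

bhat = minimize(negll, np.zeros(3), method="BFGS").x
print("probit coefficients (const, y2, vhat):", bhat)
```

With these invented parameters the fitted slope on y2 should land near 0.5/0.6 and the coefficient on vhat near 0.8/0.6 (the structural values rescaled by the residual standard deviation); a nonzero vhat coefficient is the control-function signal of endogeneity.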

3.
If there is a riskless asset, then the distribution of every portfolio is determined by its mean and variance if and only if the random returns are a linear transformation of a spherically distributed random vector. If there is no riskless asset, then the spherically distributed random vector is replaced by a random vector in which the last n − 1 components are spherically distributed conditional on the first component, which has an arbitrary distribution. If the number of assets is infinite, then there must exist random variables m, v, y, where the distribution of y conditional on m and v is standard normal, such that every portfolio is distributed as some linear combination of m and vy. If there is a riskless asset, then m has zero variance. These distributions exhibit two-fund separability even if the utility function is not concave.

4.
This article considers the reform of a commodity tax system. Consumers' preferences over directions of tax reform are constructed from indirect utility functions. A Wicksellian decision procedure is used to define a dominance relation on the set of directions of change; direction x dominates direction y if and only if (a) everybody prefers x to y or (b) x is the status quo and at least one person prefers x to y. A number of characterizations of undominated directions of change are provided. A related unanimity rule procedure, which does not single out the status quo for special treatment, is also considered. Particular attention is paid to the issue of whether Wicksellian reforms preserve production efficiency. Remarks on the relationship between this work, previous work in optimal taxation theory, and social choice theory are also provided.

5.
6.
The discernment of relevant factors driving health care utilization constitutes one important research topic in health economics. This issue is frequently addressed through specification of regression models for health care use (y—often measured by number of doctor visits) including, among other covariates, a measure of self-assessed health (sah). However, the exogeneity of sah within those models has been questioned, due to the possible presence of unobservables influencing both y and sah, and because individuals’ health assessments may depend on the quantity of medical care received. This article addresses the possible simultaneity of (sah, y) by adopting a full information approach, through specification of the bivariate probability function (p.f.) of these discrete variables, conditional on a set of exogenous covariates (x). The approach is implemented with copula functions, which afford separate consideration of each variable margin and their dependence structure. The specification of the joint p.f. of (sah, y) enables estimation of several quantities of potential economic interest, namely features of the conditional p.f. of y given sah and x. The adopted models are estimated through maximum likelihood, with cross-sectional data from the Portuguese National Health Survey of 1998–1999. Estimates of the margins’ parameters do not vary much among different copula models, while, in accordance with theoretical expectations, the dependence parameter is estimated to be negative across the various joint models.
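The copula construction can be sketched generically as follows (a minimal illustration, not the article's specification: the Frank family, the margins and the dependence parameter are all invented for the example). A joint p.f. for two discrete variables is obtained from their marginal CDFs and a copula via the standard rectangle (inclusion-exclusion) rule.

```python
import numpy as np

def frank_copula(u, v, theta):
    """Frank copula C(u, v; theta); theta < 0 gives negative dependence."""
    return -np.log1p(np.expm1(-theta * u) * np.expm1(-theta * v)
                     / np.expm1(-theta)) / theta

# Invented margins: sah on 3 ordered levels, visit counts y on 0..4.
F_sah = np.array([0.2, 0.6, 1.0])            # marginal CDF of sah
F_y = np.array([0.3, 0.6, 0.8, 0.95, 1.0])   # marginal CDF of y
theta = -3.0  # negative dependence, the sign the article estimates

# Joint p.f. via the rectangle (inclusion-exclusion) rule on the copula.
U = np.concatenate(([0.0], F_sah))
V = np.concatenate(([0.0], F_y))
C = frank_copula(U[:, None], V[None, :], theta)
pmf = C[1:, 1:] - C[:-1, 1:] - C[1:, :-1] + C[:-1, :-1]
print(pmf.sum())  # the cell probabilities sum to 1
```

Because the copula pins down only the dependence structure, the margins F_sah and F_y can be specified and estimated separately, which is the modelling freedom the abstract highlights.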

7.
Conclusions  A particular aspect of the present paper is the introduction of specific policy measures for the government, whose behavior on the goods market was described in earlier work as purely exogenous, as in Malinvaud. In our context, the government appears as an active economic agent, acting to absorb any excess supply or reduce any excess demand on the goods market. Though this behavior may look somewhat arbitrary, it has the advantage of forcing the state of the economy towards a SME when combined with natural endogenous behavior of the other agents. Furthermore, it does not contradict observed policies through which governments stimulate or restrain economic activity via purchases or fiscal and monetary policies. Perhaps alternative policies, like direct actions on the labor market by supplying (non-productive) jobs or unemployment compensation, could have done as well: this remains an open door for further research. The preceding feature also contrasts with the recent work done by V. Böhm (1978) in a macroeconomic setup. In that paper, he studies the stability of stationary Keynesian unemployment or stationary repressed inflation states, but without imposing a particular policy on the government. Comparing his work with ours, it is easily verified that if the government kept its consumption at the level g*, the SME would be stable if the economy starts out in Keynesian unemployment and unstable if it starts out in repressed inflation, confirming Böhm's result. Our analysis is related to earlier work of Archibald and Lipsey (1958), dealing with the adjustment of the economy to a Stationary Equilibrium after a change in real balances. The Quantity Theory of Money postulates that along a SME, a change in the price and the wage rate from (p, w) to (p′, w′) leads to an adjustment in the level of stationary money holdings from m_i* to m_i*′, with m_i*′ = m_i*(p′, w′).
In that paper, Archibald and Lipsey suggest that the economy follows a sequence of temporary market equilibria: starting from a change in real balances, prices adjust at each period through a tâtonnement process so as to match supply and demand. Our paper proposes an alternative path: at each period, quantities adjust through a tâtonnement process at constant prices. This paper is a revised version of CORE Discussion Paper 7701. We wish to thank Paul Champsaur, Jacques Drèze, Werner Hildenbrand and Reinhard John for stimulating discussions. We are grateful to Volker Böhm for valuable comments and criticisms. Research supported by the Fonds National Belge de la Recherche Scientifique.

8.
Abstract

Aims: To evaluate the risk-of-hospitalization (ROH) models developed at Blue Cross Blue Shield of Louisiana (BCBSLA) and compare this approach to the DxCG risk-score algorithms utilized by many health plans.

Materials and Methods: Time zero for this study was December 31, 2016. BCBSLA members were eligible for study inclusion if they were fully insured, aged 80 years or younger, and had continuous enrollment starting on or before June 1, 2016, through time zero. Up to 2 years of historical claims data from time zero per patient were included for model development. Members were excluded if they had cancer, renal failure, or were admitted for hospice. The Blue Cross ROH models were developed using (1) regularized logistic regression and (2) random decision forests (a tree-ensemble learning classification method). All models were generated using Scikit-learn: Machine Learning in Python. Prognostic capabilities of the DxCG risk-score algorithms were compared to those of the Blue Cross models.

Results: When stratifying by the top 0.1% of members with the highest ROH, the Blue Cross logistic regression model had the highest area under the receiver operating characteristic curve (0.862) based on the result of 10-fold cross-validation. The Blue Cross random decision forests model had the highest positive predictive value (49.0%) and positive likelihood ratio (61.4), but sensitivity, specificity, negative predictive values, and negative likelihood ratios were similar across all four models.

Limitations: The Blue Cross ROH models were developed and evaluated using BCBSLA data, and predictive power may fluctuate if applied to other databases.

Conclusions: The predictability of the Blue Cross models shows how member-specific, regional data can be used to accurately identify patients with a high ROH, which may allow healthcare workers to intervene earlier and subsequently reduce the healthcare burden for patients and providers.
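The modelling pipeline described in the Methods can be sketched as follows. This is a generic scikit-learn illustration on synthetic, claims-like features, not the BCBSLA implementation or data; all dataset parameters are invented. It fits the two model families named in the abstract and scores each by mean 10-fold cross-validated AUC, as in the study.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for engineered claims features; hospitalization is
# made rare (10%) to mimic the imbalanced ROH target.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)

models = {
    "logistic (L2-regularized)": LogisticRegression(penalty="l2", max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
aucs = {}
for name, model in models.items():
    # Mean area under the ROC curve across 10 folds, as in the study.
    aucs[name] = cross_val_score(model, X, y, cv=10, scoring="roc_auc").mean()
    print(f"{name}: mean 10-fold AUC = {aucs[name]:.3f}")
```

Cross-validated AUC is the natural headline metric here because the positive class is rare, which is also why the abstract reports stratified positive predictive values alongside it.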

9.
We revisit the “Coase theorem” through the lens of a cooperative game model which takes into account the assignment of rights among agents involved in a problem of social cost. We consider the case where one polluter interacts with many potential victims. Given an assignment or a mapping of rights, we represent a social cost problem by a cooperative game. A solution consists in a payoff vector. We introduce three properties for a mapping of rights. First, core compatibility indicates that the core of the associated cooperative games is nonempty. Second, Kaldor‐Hicks core compatibility indicates that there is a payoff vector in the core where victims are fully compensated for the damage once the negotiations are completed. Third, no veto power for a victim says that no victim has the power to veto an agreement signed by the rest of the society. We then demonstrate two main results. First, core compatibility is satisfied if and only if the rights are assigned either to the polluter or to the entire set of victims. Second, there is no mapping of rights satisfying Kaldor‐Hicks core compatibility and no veto power for a victim.

10.
This article develops multiobjective models of hospital decision making that incorporate the internal decision process in both a for‐profit and a non‐profit hospital (NPH). Predicted output and quality for an NPH differ from those for a for‐profit hospital under some conditions but converge under others. Convergence may be the result of a complex internal decision structure with decision control primarily by physicians, similar objectives across different organizational forms, or differing constraints. The mechanisms underlying these outcomes provide explanations for conflicting results in empirical studies of non‐profit and for‐profit hospitals and provide a different rationale for convergence than non‐profit response to competition from for‐profit hospitals. Understanding the source of convergence is important for policies directed toward the tax treatment of NPHs. (JEL D21, D23, I11, L3, L21)

11.
Measuring Beliefs in an Experimental Lost Wallet Game
We measure beliefs in an experimental game. Player 1 may take x < 20 Dutch guilders, or leave it and let player 2 split 20 guilders between the players. We find that the higher x (our treatment variable), the more likely player 1 is to take it. Of those who leave the x, many expect to get back less than x. There is no positive correlation between x and the amount y that player 2 allocates to player 1. However, there is a positive correlation between y and player 2's expectation of player 1's expectation of y. Journal of Economic Literature Classification Numbers: C72, C92.

12.
We study a class of single-server queueing systems with a finite population size, FIFO queue discipline, and no balking or reneging. In contrast to the predominant assumptions of queueing theory of exogenously determined arrivals and steady state behavior, we investigate queueing systems with endogenously determined arrival times and focus on transient rather than steady state behavior. When arrival times are endogenous, the resulting interactive decision process is modeled as a non-cooperative n-person game with complete information. Assuming discrete strategy spaces, the mixed-strategy equilibrium solution for groups of n = 20 agents is computed using a Markov chain method. Using a 2 × 2 between-subject design (private vs. public information by short vs. long service time), arrival and staying out decisions are presented and compared to the equilibrium predictions. The results indicate that players generate replicable patterns of behavior that are accounted for remarkably well on the aggregate, but not individual, level by the mixed-strategy equilibrium solution unless congestion is unavoidable and information about group behavior is not provided. JEL Classification: C71, C92, D81

13.
Symmetric (3,2) simple games serve as models for anonymous voting systems in which each voter may vote “yes,” abstain, or vote “no,” the outcome is “yes” or “no,” and all voters play interchangeable roles. The extension to symmetric (j,2) simple games, in which each voter chooses from among j ordered levels of approval, also models some natural decision rules, such as pass–fail grading systems. Each such game is determined by the set of (anonymous) minimal winning profiles. This makes it possible to count the possible systems, and the counts suggest some interesting patterns. In the (3,2) case, the approach yields a version of May's Theorem, classifying all possible anonymous voting rules with abstention in terms of quota functions. In contrast to the situation for ordinary simple games these results reveal that the class of simple games with 3 or more levels of approval remains large and varied, even after the imposition of symmetry.

14.
We study the effects of introducing payouts on corporate debt and optimal capital structure in a structural credit risk model à la Leland (1994). We find that increasing the payout parameter not only lowers the endogenous bankruptcy level, but also modifies how strongly the endogenous failure level responds to an increase in the risk-free rate, the corporate tax rate, the riskiness of the firm and the coupon payments. This simple analytical framework is able to capture realistic insights about optimal leverage, spreads and default probabilities more in line with historical norms (compared to Leland's results) and closer to predictions obtained through more sophisticated models.

15.
The (physical) output adjustment model and the price adjustment model are presented. Using the two models, we quantitatively analyze how a change in one sector's (physical) gross output affects another sector's (physical) gross output, and how a change in one sector's price affects another sector's price. Hence, a basic property of the Ghosh inverse and a fundamental character of the monetary Leontief inverse are obtained. The proposition that a matrix of intermediate output (input) coefficients alters if the vector of output (price) adjustment coefficients is nontrivial holds if, and only if, this matrix is C-irreducible. It is impossible that (i) an adaptation of the output system causes all sectoral final output rates (or input multipliers) either to rise or to fall collectively, or that (ii) an adjustment of the price system causes all sectoral value-added rates (or output multipliers) either to increase or to decrease jointly. However, it is possible that (i) a change of the output system makes some sectoral final output rates (or input multipliers) rise (fall) while all others remain constant, and that (ii) an alteration of the price system makes some sectoral value-added rates (or output multipliers) increase (decrease) while all others remain fixed; the necessary and sufficient condition for this is that the matrix of intermediate output (or input) coefficients has at least one non-final (or non-initial) class. The proposition that the vector of final output rates (or input multipliers) changes if the vector of output adjustment coefficients is nontrivial is true if, and only if, the matrix of intermediate output coefficients has only one final class. The proposition that the vector of value-added rates (or output multipliers) alters if the vector of price adjustment coefficients is nontrivial holds if, and only if, the matrix of intermediate input coefficients has only one initial class.
The necessary and sufficient conditions, and the matching economic explanations, for the possibility and uniqueness of an economic adjustment that gives (i) all sectors a uniform final output rate (or input multiplier), and (ii) all sectors the same value-added rate (or output multiplier), are respectively provided. I would like to thank an anonymous referee for helpful comments and suggestions.
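The two inverses the propositions concern can be illustrated on a toy transactions table (a minimal sketch with invented numbers, not data from the article): the input coefficients divide each intermediate flow by the purchasing sector's gross output, the output coefficients divide it by the selling sector's, and the multipliers are the corresponding sums of the Leontief and Ghosh inverses.

```python
import numpy as np

# Toy 3-sector transactions table Z and gross outputs x (invented
# numbers for illustration only).
Z = np.array([[20., 30., 10.],
              [40., 10., 30.],
              [10., 50., 20.]])
x = np.array([200., 150., 120.])

A = Z / x             # input coefficients: a_ij = z_ij / x_j (column shares)
B = Z / x[:, None]    # output coefficients: b_ij = z_ij / x_i (row shares)

L = np.linalg.inv(np.eye(3) - A)   # monetary Leontief inverse
G = np.linalg.inv(np.eye(3) - B)   # Ghosh inverse

out_mult = L.sum(axis=0)   # output multipliers (column sums of L)
in_mult = G.sum(axis=1)    # input multipliers (row sums of G)
print("output multipliers:", out_mult)
print("input multipliers: ", in_mult)
```

Both inverses are nonnegative with multipliers of at least 1 whenever the coefficient column (row) sums are below one, which is the setting in which the abstract's rise/fall propositions operate.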

16.
This paper addresses the impact of active labor market programs on interregional migration in Sweden. The purpose of the study is to determine the extent to which the programs, which provide training and labor market assistance to jobless individuals, induce participants to migrate. Analysis is based on data registers compiled in 1994 and 1995 by Statistics Sweden and the Labor Market Board of Sweden. The paper specifies and estimates a two‐equation model of participation and subsequent migration. The model, which is estimated by the method of maximum simulated likelihood, accounts for the role of program participation as an endogenous choice variable in the decision to migrate. In an attempt to capture the effect of migrant self‐selection, the estimation approach also controls for unobserved heterogeneity in the participation and migration equations. Results of the study indicate a significant positive impact of participation on subsequent mobility for males. This result is robust with respect to alternative specifications of the migration equation and alternative formulations of the model for program participation. For females, the evidence of program impacts is mixed and it appears to be sensitive to the statistical formulation of the model. (JEL J38, J61)

17.
Consider a decision problem under uncertainty for a decision maker with known (utility) payoffs over prizes. We say that an act is Choquet (Shafer, Bernoulli) rational if for some capacity (belief function, probability) over the set of states, it maximizes her “expected” utility. We show that an act may be Choquet rational without being Bernoulli rational, but it is Choquet rational if and only if it is Shafer rational. Journal of Economic Literature Classification Numbers: C72, D81.
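The "expected" utility with respect to a capacity is the Choquet integral. A minimal sketch of how it is computed, with an invented three-state space, an invented convex capacity nu(A) = (|A|/3)^2, and invented payoffs (none of this comes from the article):

```python
from itertools import combinations

def choquet(payoff, capacity):
    """Choquet integral: rank states by payoff (descending) and weight each
    payoff by the capacity increment of its upper level set."""
    ranked = sorted(payoff, key=payoff.get, reverse=True)
    total, prev, upper = 0.0, 0.0, frozenset()
    for s in ranked:
        upper = upper | {s}
        total += payoff[s] * (capacity[upper] - prev)
        prev = capacity[upper]
    return total

states = ("s1", "s2", "s3")
# Invented convex (pessimistic) capacity nu(A) = (|A|/3)**2.
capacity = {frozenset(c): (len(c) / 3) ** 2
            for r in (1, 2, 3) for c in combinations(states, r)}

act = {"s1": 10.0, "s2": 5.0, "s3": 0.0}
print(choquet(act, capacity))  # 25/9, below the uniform-probability value 5
```

With an additive capacity this reduces to ordinary (Bernoulli) expected utility; the convex capacity above underweights good outcomes, which is the sense in which a Choquet-rational act need not be Bernoulli rational.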

18.
ABSTRACT

This article analyses the relationship between compatibility and innovation in markets with network effects, using a model of competition with endogenous R&D, commercialization and compatibility. Compatibility is a mutual decision between firms, and demand depends partially on overall consumption across compatible networks. Incumbent acquisition of an innovation, or profit from entry, gives entrepreneurs an incentive to develop technological improvements, and entrepreneurs receive greater returns if larger incumbents offer compatibility with their installed base. But for sufficiently weak network effects, a large incumbent increases demand for its own product by denying compatibility to rivals. As a result, a credible threat of incompatibility reduces the entrepreneur's reservation price for selling an innovation, but can also increase offers from smaller incumbents to acquire the innovation if doing so avoids an incompatibility response from a larger incumbent. In response, entrepreneurs adjust their research effort to target a favourable compatibility regime that maximizes profit from entry or offers from incumbents to acquire the innovation. This leads to a complex relationship between the strength of network effects, innovation incentives, the entrepreneur's ambition for improvement, and the potential disruption of the compatibility regime.

19.
Summary. Let ≽ be a continuous and convex weak order on the set of lotteries defined over a set Z of outcomes. Necessary and sufficient conditions are given to guarantee the existence of a set U of utility functions defined on Z such that, for any lotteries p and q, p ≽ q if and only if min_{u ∈ U} E_p[u] ≥ min_{u ∈ U} E_q[u]. The interpretation is simple: a conservative decision maker has an unclear evaluation of the different outcomes when facing lotteries. She then acts as if she were considering many expected utility evaluations and taking the worst one. Received: January 19, 2000; revised version: December 20, 2000

20.
Abstract

Composite measures that combine different types of indicators are widely used in medical research: to evaluate health systems, as outcomes in clinical trials, and in patient-reported outcome measurement. The potential advantages of such indices are clear. They are used to summarise complex data and to overcome the problem of evaluating new interventions when the most important outcome is rare or likely to occur far in the future. However, many scientists question the value of composite measures, primarily due to inadequate development methodology, lack of transparency or the likelihood of producing misleading results. It is argued that the real problems with composite measurement are related to their failure to take account of measurement theory and the absence of coherent theoretical models that justify the addition of the individual indicators combined into the composite index. All outcome measures must be unidimensional if they are to provide meaningful data. They should also have dimensional homogeneity. Ideally, a specification equation should be developed that can accurately predict how organisations or individuals will score on an index, based on their scores on the individual indicators that make up the measure. The article concludes that composite measures should not be used, as they fail to apply measurement theory and, consequently, produce invalid and misleading scores.
