Similar Documents
20 similar documents found (search time: 31 ms)
1.
Consider an offshore fishing grounds of size K. Suppose the grounds has been overfished to the point that net revenue has been driven to zero and the fishery is in open access equilibrium at (X, Y). A marine sanctuary, where fishing is prohibited, is then created. Suppose the marine sanctuary is of size K2 and that fishing is allowed on a smaller grounds, now of size K1, where K1 + K2 = K. In the first, deterministic, model, the present value of net revenue from the grounds-sanctuary system is maximized subject to migration (diffusion) of fish from the sanctuary to the grounds. The size of the sanctuary is varied, the system is re-optimized, and the population levels, harvest, and value of the fishery are compared to the 'no-sanctuary' optimum and the open access equilibrium. In the deterministic model, a marine sanctuary reduces the present value of the fishery relative to the 'ideal' of optimal management of the original grounds. In the second model, net growth is subject to stochastic fluctuation. Simulation demonstrates the ability of a marine sanctuary to reduce the variation in biomass on the fishing grounds. Variance reduction in fishable biomass is examined for different-sized sanctuaries when net growth on the grounds and in the sanctuary fluctuates independently and when the two are perfectly correlated. For the stochastic model of this paper, sanctuaries ranging in size from 40 to 60% of the original grounds (0.4 ≤ K2/K ≤ 0.6) lowered the variation in fishable biomass compared to the no-sanctuary case. For a sanctuary equal to or greater than 70% of the original grounds (K2 ≥ 0.7K), net revenue would be nonpositive and there would be no incentive to fish.
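The grounds-sanctuary dynamics described in this abstract can be sketched as a two-patch logistic model. The code below is an illustrative toy version, not the paper's model: the function name, the parameter values, and the density-difference migration rule are our assumptions.

```python
import random

def simulate(K1, K2, D=0.1, r=0.5, harvest=0.3, sigma=0.0, steps=500, seed=1):
    """Two-patch logistic model: a fished grounds of size K1 and a
    no-take sanctuary of size K2.  Fish grow logistically in each patch,
    migrate toward the patch with lower relative density at rate D, and
    only the grounds are harvested.  Returns the grounds biomass path."""
    rng = random.Random(seed)
    x1, x2 = K1 / 2.0, K2 / 2.0        # initial stocks
    path = []
    for _ in range(steps):
        g1 = r * x1 * (1.0 - x1 / K1) * (1.0 + rng.gauss(0, sigma))
        if K2 > 0:
            g2 = r * x2 * (1.0 - x2 / K2) * (1.0 + rng.gauss(0, sigma))
            flow = D * (x2 / K2 - x1 / K1)   # net migration into grounds
        else:
            g2, flow = 0.0, 0.0
        x1 = max(x1 + g1 + flow - harvest * x1, 0.0)
        x2 = max(x2 + g2 - flow, 0.0)
        path.append(x1)
    return path
```

Running this with sigma > 0 and comparing the variance of `path` for K2 between 0.4K and 0.6K against the K2 = 0 case reproduces the flavor of the paper's variance-reduction experiment.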

2.
The standard linear model, where ut is generated from an ARFIMA process, is considered. The sensitivity of the predictor and of the variance estimates of the linear model to long memory is investigated by constructing the statistical measures BL/S and DL/S, respectively. BL/S and DL/S are interpreted as sensitivity measures for the long-memory process without the short-memory effects. As an application, the memory characteristics of per capita GDP for 30 countries are investigated using the Maddison GDP dataset. It is found that per capita GDP exhibits long-memory characteristics, and that long-run growth estimates are sensitive to those characteristics.
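For readers who want to experiment with long-memory series of the kind this abstract studies, an ARFIMA(0, d, 0) process can be simulated from its truncated MA(∞) representation. A minimal sketch (the truncation length and the function name are our choices, not the paper's):

```python
import random

def arfima_0d0(n, d, trunc=200, seed=0):
    """Simulate ARFIMA(0, d, 0), x_t = (1 - B)^(-d) eps_t, from the
    truncated MA(infinity) expansion.  The weights satisfy psi_0 = 1 and
    psi_j = psi_{j-1} * (j - 1 + d) / j, which equals
    Gamma(j + d) / (Gamma(d) * Gamma(j + 1)) without overflow."""
    psi = [1.0]
    for j in range(1, trunc):
        psi.append(psi[-1] * (j - 1 + d) / j)
    rng = random.Random(seed)
    eps = [rng.gauss(0, 1) for _ in range(n + trunc)]
    # x_t = sum_j psi_j * eps_{t-j}; the eps index is offset by trunc
    return [sum(p * eps[t + trunc - j] for j, p in enumerate(psi))
            for t in range(n)]
```

For 0 < d < 0.5 the series is stationary with slowly decaying autocorrelations, the long-memory signature the measures BL/S and DL/S are built to detect.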

3.
In an experiment using two-bidder first-price sealed-bid auctions with symmetric independent private values and 400 subjects, we also scan the right hand of each subject. We study how the ratio of the lengths of the index and ring fingers (2D:4D) of the right hand, a measure of prenatal hormone exposure, correlates with bidding behavior and total profits. 2D:4D has been reported to predict competitiveness in sports competition (Manning and Taylor in Evol. Hum. Behav. 22:61–69, 2001, and Hönekopp et al. in Horm. Behav. 49:545–549, 2006), risk aversion in lottery tasks (Dreber and Hoffman in Portfolio selection in utero. Stockholm School of Economics, 2007; Garbarino et al. in J. Risk Uncertain. 42:1–26, 2011), and the average profitability of high-frequency traders in financial markets (Coates et al. in Proc. Natl. Acad. Sci. 106:623–628, 2009). We do not find any significant correlation of 2D:4D with either bidding or profits. However, there might be racial differences in the correlation between 2D:4D and bidding and profits.

4.
This paper addresses the question of how uncertainty in costs and benefits affects the difficulty of reaching a voluntary agreement among sovereign states. A measure of difficulty is constructed, related to the side-payments necessary to make an agreement a Pareto-improving move. Using a simple model, it is shown that uncertainty actually makes agreement easier. JEL classifications: Q5, H4, D7, D8. An earlier version of this paper was presented at the Conference on Risk and Uncertainty in Environmental and Resource Economics, Wageningen, The Netherlands, June 2002.

5.
We use numerical methods to compute Nash equilibrium (NE) bid functions for four agents bidding in a first-price auction. Each bidder i is randomly assigned a risk characteristic r_i ∈ [0, r_max], where 1 − r_i is the Arrow-Pratt measure of constant relative risk aversion. Each r_i is independently drawn from a cumulative distribution function, a beta distribution on [0, r_max]. For various values of the maximum propensity to seek risk, r_max, the expected value of any bidder's risk characteristic, E(r_i), and the probability that any bidder is risk seeking, P(r_i > 1), we determine the nonlinear characteristics of the NE bid functions.
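The heterogeneous-risk equilibrium the paper computes has no closed form, but the symmetric benchmark is standard: with n bidders, values uniform on [0, 1], and common CRRA utility u(w) = w^r (so 1 − r is the Arrow-Pratt coefficient), the equilibrium bid is b(v) = (n − 1)v / (n − 1 + r). A sketch of that benchmark together with beta-distributed risk draws; function names and the beta shape parameters are illustrative assumptions:

```python
import random

def crra_benchmark_bid(v, n, r):
    """Symmetric equilibrium bid in a first-price auction with n bidders,
    values uniform on [0, 1], and common CRRA utility u(w) = w**r
    (r = 1 is risk neutral, r < 1 risk averse, r > 1 risk seeking):
        b(v) = (n - 1) * v / (n - 1 + r)."""
    return (n - 1) * v / (n - 1 + r)

def draw_risk_types(n, r_max, a=2.0, b=2.0, seed=0):
    """Draw each bidder's r_i from a Beta(a, b) rescaled to [0, r_max]."""
    rng = random.Random(seed)
    return [r_max * rng.betavariate(a, b) for _ in range(n)]
```

Note that more risk aversion (smaller r) pushes the benchmark bid up toward v, the qualitative effect the numerical NE bid functions exhibit as well.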

6.
This study extends the one-period zero-VaR (Value-at-Risk) hedge ratio proposed by Hung et al. (Hung, J. C., Chiu, C. L. and Lee, M. C. 2005, 'Hedging with zero-Value at Risk hedge ratio', Applied Financial Economics, 16: 259–69) to the multi-period case and incorporates the hedging horizon into the objective function under the VaR framework. The multi-period zero-VaR hedge ratio has several advantages. First, compared to existing hedge ratios based on downside risk, it has an analytical solution and is simple to calculate. Second, compared to the traditional minimum-variance (MV) hedge ratio, it considers expected return and remains optimal even when the martingale process is invalid. Third, hedgers may select an adequate hedging horizon and confidence level to reflect their degree of risk aversion using the concept of VaR. To account for volatility clustering and price jumps, this study utilizes the ARJI model to compute time-varying hedge ratios. Finally, both in-sample and out-of-sample hedging effectiveness of the one-period and multi-period hedge ratios are evaluated for four hedging horizons and various levels of risk aversion. The empirical results indicate that hedgers wishing to hedge downside risk over long horizons should use the multi-period zero-VaR hedge ratios.
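The paper's ARJI-based estimator is not reproduced here, but the core idea of a VaR-oriented hedge ratio, choosing h to minimize the VaR of the hedged return rather than its variance, can be sketched with a simple grid search under a normal-VaR approximation (function name, grid, and the normality assumption are ours):

```python
import math

def var_minimizing_hedge(spot, fut, z=1.6449, grid=None):
    """Choose the hedge ratio h minimizing parametric VaR of the hedged
    return r_p = r_s - h * r_f, where VaR = -mu_p + z * sigma_p
    (z = 1.6449 is roughly the 95% normal quantile).  Unlike the
    minimum-variance ratio, the objective keeps the expected-return
    term mu_p, which is the zero-VaR idea in a nutshell."""
    if grid is None:
        grid = [i / 100.0 for i in range(0, 201)]   # h in [0, 2]
    n = len(spot)
    def var_at(h):
        port = [s - h * f for s, f in zip(spot, fut)]
        mu = sum(port) / n
        sd = math.sqrt(sum((p - mu) ** 2 for p in port) / (n - 1))
        return -mu + z * sd
    return min(grid, key=var_at)
```

When expected futures returns are zero (the martingale case), the VaR-minimizing ratio collapses to the minimum-variance ratio; the two diverge exactly when the martingale property fails, as the abstract notes.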

7.
The Test of Financial Literacy (TFL) was created to measure the financial knowledge of high school students. Its content is based on the standards and benchmarks stated in the National Standards for Financial Literacy (Council for Economic Education (CEE) 2013, National Standards in Financial Literacy, New York: CEE). The test development process involved extensive item writing and review. Test data collected from 1,218 high school students to evaluate the measure indicate that the overall test is reliable and valid and that the test items contribute to the effectiveness of the instrument. Further test analysis was conducted using an item response theory (IRT) model with four parameters to estimate item discrimination, item difficulty, guessing, and inattention. The IRT results indicate that the measure is effective in assessing student financial literacy across a broad range of student abilities.
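The four-parameter logistic (4PL) IRT model mentioned above has a standard closed form; a minimal sketch (parameter names follow common IRT usage, not the article's code):

```python
import math

def irt_4pl(theta, a, b, c, d):
    """Four-parameter logistic IRT model: the probability that a student
    with ability theta answers an item correctly, given discrimination a,
    difficulty b, guessing (lower asymptote) c, and inattention
    (upper asymptote) d:
        P(theta) = c + (d - c) / (1 + exp(-a * (theta - b)))"""
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))
```

At theta = b the probability is exactly midway between c and d; setting c = 0 and d = 1 recovers the familiar two-parameter model.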

8.
The concept of a middle class is prevalent in both common parlance and the social sciences; concern is frequently expressed that the middle class is shrinking, and politicians often position themselves as champions of the middle class. Yet the phrase “middle class” is extremely ambiguous; no consensus exists on either the upper bound or the lower bound separating the middle class from other classes. The present paper employs the government’s official poverty line as the demarcation between the poor and the middle class, and develops an equivalent distinction to separate the middle class from the wealthy. Based on the new definition, the paper provides some rough empirical estimates of the size of the American middle class over the 1989–2004 period.

Joseph G. Eisenhauer is Professor and Chair of Economics at Wright State University. A past president and Distinguished Fellow of the New York State Economics Association, he has also been a Huebner Fellow at the University of Pennsylvania's Wharton School, a visiting scholar at the Catholic University of America, and a visiting professor at the University of Rome. His research focuses on risk aversion, precautionary saving, insurance, ethics, and social class. He has been published in numerous professional journals, including Review of Social Economy, Journal of Socio-Economics, International Journal of Social Economics, Review of Political Economy, Eastern Economic Journal, Journal of Risk and Insurance, Journal of Insurance Issues, Applied Economics, Empirical Economics, International Journal of Health Care Finance and Economics, and Economics Bulletin, among others.

9.
Summary The paper by C. Ma [1] contains several errors. First, the statement and proof of Theorem 2.1, on the existence of an intertemporal recursive utility function as the unique solution to the Koopmans equation, must be amended. Several additional technical conditions concerning the consumption domain, the measurability of the certainty equivalent, and the utility process need to be assumed for the validity of the theorem. Second, the assumptions for Theorem 3.1 need to be amended to include Feller's condition: for any bounded continuous function f in C(S × R^n_+), the conditional expectation E[f(S_{t+1}, ·) | S_t = s] is bounded and continuous in its arguments. In addition, for Theorem 3.1, the price p, the endowment e, and the dividend rate as functions of the state variables S are assumed to be continuous. Feller's condition for Theorem 3.1 ensures that the value function is well defined. This condition needs to be assumed even for expected additive utility functions (see Lucas [2]). It is noted that, under this condition, the right-hand side of equation (3.5) in [1] defines a bounded continuous function in its arguments. The proof of Theorem 3.1 remains valid with this remark in place. A correct version of Theorem 2.1 in [1] is stated and proved in this corrigendum. Ozaki and Streufert [3] were the first to cast doubt on the validity of this theorem. They point out correctly that additional conditions ensuring the measurability of the utility process need to be assumed. This condition is identified as condition CE4 below. In addition, I notice that the consumption space is not suitably defined in [1], especially when an unbounded consumption set is assumed. In contrast to what is claimed in [3], I show that a uniformly bounded consumption set X and a stationary information structure are not necessary for the validity of Theorem 2.1. I would like to thank Hiroyuki Ozaki and Peter Streufert for pointing out some mistakes made in the original article.
Comments and suggestions from an anonymous referee are gratefully appreciated. Financial support from the SSHRC of Canada is acknowledged.

10.
Summary In this paper we consider Anonymous Sequential Games with Aggregate Uncertainty. We prove existence of equilibrium when there is a general state space representing aggregate uncertainty. When the economy is stationary and the underlying process governing aggregate uncertainty is Markov, we provide Markov representations of the equilibria. (The original abstract includes a table of notation; its symbols were garbled in extraction and it is omitted here.) We wish to acknowledge very helpful conversations with C. d'Aspremont, B. Lipman, A. McLennan and J-F. Mertens.
The financial support of the SSHRCC and the ARC at Queen's University is gratefully acknowledged. This paper was begun while the first author visited CORE. The financial support of CORE and the excellent research environment is gratefully acknowledged. The usual disclaimer applies.

11.
Summary We show that a Dedekind complete Riesz space which contains a weak unit e and admits a strictly positive order continuous linear functional can be represented as a subspace of the space L1 of integrable functions on a probability measure space, in such a way that the order ideal generated by e is carried onto L∞. As a consequence, we obtain a characterization of abstract M-spaces that are isomorphic to concrete L∞-spaces. Although these results are implicit in the literature on representation of Riesz spaces, they are not available in this form. This research is motivated by, and has applications in, general equilibrium theory in infinite-dimensional spaces. We thank Robert Anderson and Neil Gretsky for several useful conversations. The third author also gratefully acknowledges financial support from the Deutsche Forschungsgemeinschaft, the Gottfried Wilhelm Leibniz Förderpreis, the National Science Foundation, and the UCLA Academic Senate Committee on Research.

12.
Emissions resulting from the production process can be characterized as use of the elimination and disposal services of the ecological system. Hence, they are a use of natural resources and thus an input to production. The present paper discusses an approach to evaluating the returns to this kind of service as a production factor. First, four main types of industrial emission are chosen — SO2, CO2, NOx and particulate matter — and integrated into a Cobb-Douglas production function. With this approach, the production elasticities and the marginal products of these types of emission can be estimated. Based on these results, and assuming that marginal product equals price, the demand curve for emission is estimated. With this demand curve, the consequences of different kinds of environmental policy are considered. Under further assumptions of optimal behaviour it can be shown that the demand curve for emission is equal to the curve of marginal costs of avoidance (MCA). Thus, the estimated demand curves can be considered estimates of the MCA curves. Furthermore, the price elasticities of these four types of emission are estimated with this approach. The method used in the paper is suggested for the calibration of CGE models.
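The estimation idea, recovering production elasticities from a log-linearized Cobb-Douglas function, can be illustrated with a one-input toy version; the multi-input case of the paper adds regressors in the same way (names and data below are ours):

```python
import math

def log_log_elasticity(inputs, outputs):
    """Estimate the production elasticity beta in Y = A * E**beta by OLS
    on log Y = log A + beta * log E.  Since beta = dlogY/dlogE, the
    marginal product of the input at a point (E, Y) is beta * Y / E."""
    x = [math.log(e) for e in inputs]
    y = [math.log(q) for q in outputs]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    log_a = my - beta * mx
    return beta, math.exp(log_a)
```

Under the abstract's assumption that marginal product equals price, inverting the fitted marginal-product relation in the input traces out the demand (equivalently, MCA) curve for that emission type.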

13.
Summary Assume that L is a topological vector lattice and Y is a closed subset of L_+ × R^N, where R^N denotes N-dimensional Euclidean space. It is shown that the set Y − L_+ × R_+^N is closed if Y has appropriate monotonicity properties. The result is applicable to the case of L equal to L∞ with the Mackey topology τ(L∞, L1).

14.
Summary In order to avoid missing the wood for the trees, a brief summary of the results obtained above appears appropriate. Firstly, we regard it as a result in itself of the present paper to have presented a two-sector model encapsulating a Kaleckian (and Kaldorian) vision of a capitalist economy, a model in which the supply conditions of primary products take up a prominent position. Secondly, it belongs to the main results of the paper that it has laid bare, through the model presented, an astoundingly simple pattern in the way key economic concepts such as activity, employment and distributive shares are affected by, on the one hand, the demand side (which has so far captured an excessive amount of attention in macroeconomic modelling) and, on the other hand, the largely neglected supply side of the economy. Thirdly, by means of an arbitrary but not implausible numerical example we have attempted to indicate how changes in activity and distributive shares caused by exogenous changes on the demand and supply sides of the economy, respectively, are themselves crucially dependent on (the assumptions concerning) the supply elasticity of primary products. The notation applied is as follows (several symbols were lost in extraction):
C: consumption
C0: autonomous element of the consumption function
I: investment (gross)
A: autonomous expenditure
S: savings (gross)
U: stock of the primary product
Q: real output (not necessarily real income) (gross)
Y: income (gross)
W: wage bill
L: employment
w: money wage rate
p: price level
(symbol lost): mark-up factor
(symbol lost): level parameter of the production function pertaining to the primary sector
a: labour-input coefficient of the industrial sector
b: raw-material input coefficient of the industrial sector
s_w: marginal propensity to save out of wages
(symbol lost): marginal propensity to save out of profits
(symbol lost): (weighted) average of the two savings propensities
(symbol lost): (unit) raw-material costs as a proportion of total (unit) prime costs
(symbol lost): share of wages in total income
E_{Y,X}: partial or total elasticity of Y with respect to X
I am most grateful to Søren Gammelgård, Peter Guldager, Erik Strøjer Madsen, Jørgen Ulff-Møller Nielsen, Kurt Pedersen and an anonymous referee for their valuable suggestions and helpful comments on an earlier draft of this paper.

15.
We study the rates at which transaction prices aggregate information in common value auctions under the different information structures in Wilson (Rev. Econ. Stud. 44 (1977) 511) and Pesendorfer and Swinkels (Econometrica 65 (1997) 1247). We consider uniform-price auctions in which k identical objects of unknown value are auctioned to n bidders, where both n and k are allowed to diverge to infinity and k/n converges to a number in [0, 1). The Wilson assumptions lead to information aggregation at a rate proportional to [expression lost in extraction], while in the PS setting the price aggregates information at a rate proportional to [a different expression, also lost]. We also consider English auctions, and investigate whether the extra information revealed in equilibrium improves convergence rates in these auctions.

16.
This paper suggests a theory of choice among strategic situations when the rules of play are not properly specified. We take the view that a “strategic situation” is adequately described by a TU game, since it specifies what is feasible for each coalition but is silent on the procedures used to allocate the surplus. We model the choice problem facing a decision maker (DM) as having to choose from finitely many “actions”. The known “consequence” of the i-th action is a coalition-form game f_i over a fixed set of players N_i ∪ {d} (where d stands for the DM). Axioms are imposed on her choice as the list of consequences (f_1, ..., f_m) from the m actions varies. We characterize choice rules that are based on marginal contributions of the DM in general and on the Shapley value in particular.
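The characterization above rests on marginal contributions and the Shapley value. For concreteness, the Shapley value of a small TU game can be computed directly from its definition as the average marginal contribution over player orderings (an illustrative sketch, not the paper's construction):

```python
from itertools import permutations

def shapley(players, v):
    """Shapley value of a TU game v (a function mapping frozenset
    coalitions to worths), computed by averaging each player's marginal
    contribution v(S + {p}) - v(S) over all orderings of the players."""
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: total / len(perms) for p, total in phi.items()}
```

The enumeration over all n! orderings is only feasible for small games, but it makes the marginal-contribution logic behind the paper's axiomatization explicit.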

17.
Corruption is said to be characterized by persistence. This conclusion derives from the theoretical literature, although little empirical evidence exists to support it. Using corruption ratings data from the Political Risk Services Group's International Country Risk Guide on 110 countries from 1984 through 2006, I seek to determine whether corruption has actually exhibited persistence over this period. Markov transition matrices were used in the empirical analysis. The calculations show that corruption does persist in more than half of the sample. I then focus on two regions: Sub-Saharan Africa, and the Middle East and North Africa. The analysis shows these regions to be characterized by persistent corruption.
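The Markov-transition approach described in the abstract can be sketched as follows: bin the corruption ratings into discrete states, count observed state-to-state moves, and normalize each row. A heavy diagonal then signals persistence. This is an illustrative maximum-likelihood estimate, not the author's code; the binning of the ICRG ratings is assumed to be done upstream.

```python
def transition_matrix(sequences, k):
    """Estimate a k-state Markov transition matrix from observed state
    sequences (e.g. binned corruption ratings per country over time).
    Row i gives the probabilities of moving from state i; a large
    diagonal entry P[i][i] indicates persistence in state i."""
    counts = [[0] * k for _ in range(k)]
    for seq in sequences:
        for s, t in zip(seq, seq[1:]):
            counts[s][t] += 1
    P = []
    for row in counts:
        total = sum(row)
        P.append([c / total if total else 0.0 for c in row])
    return P
```

Applied per region, a comparison of diagonal masses across sub-samples mirrors the paper's finding that persistence is concentrated in particular regions.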

18.
Conclusions A particular aspect of the present paper is the introduction of specific policy measures for the government, whose behavior on the goods market was described in earlier work, as in Malinvaud, as purely exogenous. In our context, the government appears as an active economic agent, acting to absorb any excess supply or reduce any excess demand on the goods market. Though this behavior may look somewhat arbitrary, it has the advantage of forcing the state of the economy towards an SME when combined with natural endogenous behavior of the other agents! Furthermore, it does not contradict observed policies through which governments stimulate or restrain economic activity via purchases or fiscal and monetary policies. Perhaps alternative policies, like direct actions on the labor market by supplying (non-productive) jobs or unemployment compensation, could have done as well: this remains an open door for further research. The preceding feature also contrasts with the recent work done by V. Böhm (1978) in a macroeconomic setup. In that paper, he studies the stability of stationary Keynesian unemployment or stationary repressed inflation states, but without imposing a particular policy on the government. Comparing his work with ours, it is easily verified that if the government were to keep its consumption at the level g*, the SME would be stable if the economy starts out in Keynesian unemployment and unstable if it starts out in repressed inflation, confirming Böhm's result. Our analysis is related to earlier work of Archibald and Lipsey (1958) dealing with the adjustment of the economy to a stationary equilibrium after a change in real balances. The Quantity Theory of Money postulates that, along an SME, an equiproportionate change in the price level and the wage rate (p, w) leads to an equiproportionate adjustment in the level of stationary money holdings m_i*.
In this paper, Archibald and Lipsey suggest that the economy follows a sequence of temporary market equilibria: starting from a change in real balances, prices adjust in each period through a tâtonnement process so as to match supply and demand. Our paper proposes an alternative path: in each period, quantities adjust through a tâtonnement process at constant prices. This paper is a revised version of CORE Discussion Paper 7701. We wish to thank Paul Champsaur, Jacques Drèze, Werner Hildenbrand and Reinhard John for stimulating discussions. We are grateful to Volker Böhm for valuable comments and criticisms. Research supported by the Fonds National Belge de la Recherche Scientifique.

19.
The buildup of so-called greenhouse gases in the atmosphere — CO2 in particular — appears to be having an adverse impact on the global climate. This paper briefly reviews current expectations with regard to physical and biological effects, their potential costs to society, and the likely costs of abatement. For a worst-case scenario it is impossible to assess, in economic terms, the full range of possible non-linear synergistic effects. In the most favorable (although not necessarily likely) case of slow-paced climate change, however, it seems likely that the impacts are within the affordable range, at least in the industrialized countries of the world. In the third world the notion of affordability is of doubtful relevance, making the problem of quantitative evaluation almost impossible. We tentatively assess the lower limit of quantifiable climate-induced damages at $30 to $35 per ton of CO2 equivalent, worldwide, with the major damages concentrated in regions most adversely affected by sea-level rise. The non-quantifiable environmental damages are also significant and should by no means be disregarded. The costs and benefits of (1) reducing CFC use and (2) reducing fossil fuel consumption, as a means of abatement, are considered in some detail. This strategy has remarkably high indirect benefits in terms of reduced air pollution damage and even direct cost savings to consumers. The indirect benefits of reduced air pollution and its associated health and environmental effects from fossil-fuel combustion in the industrialized countries range from $20 to $60 per ton of CO2 eliminated. In addition, there is good evidence that modest (e.g. 25%) reductions in CO2 emissions may be achievable by the U.S. (and, by implication, other countries) through a combination of increased energy efficiency and restructuring that would permit simultaneous direct economic benefits (savings) to energy consumers of the order of $50 per ton of CO2 saved.
A higher level of overall emissions reduction — possibly approaching 50% — could probably be achieved, at little or no net cost, by taking advantage of these savings. We suggest the use of taxes on fossil fuel extraction (or a carbon tax) as a reasonable way of inducing the structural changes that would be required to achieve a significant reduction in energy use and CO2 emissions. To minimize the economic burden (and create a political constituency in support of the approach) we suggest the substitution of resource-based taxes in general for other types of taxes (on labor, income, real estate, or trade) that are now the main sources of government revenue. While it is conceded that it would be difficult to calculate the optimal tax on extractive resources, we do not think this is a necessary prerequisite to policy-making. In fact, we note that the existing tax system has never been optimized according to theoretical principles, and is far from optimal by any reasonable criteria. During the academic year 1989–90 Dr. Ayres was at the International Institute for Applied Systems Analysis (IIASA), Laxenburg, Austria. During the summer of 1989 Mr. Walter was a member of the Young Scientists' Summer Program at IIASA.

20.
The reversibility of sequential economic choices concerning production and consumption is addressed. A geometric approach to substitution effects and output/income effects is set forth in terms of vector fields on bundle space. By means of suitable fixing relations, the 0-homogeneity of such problems can be circumvented, so as to define global parametrizations of effects, for which Lie brackets measure the departure from commutativity. A couple of propositions are established, assessing the benchmark relevance of homothetic models. Application to Farrell decompositions, as tailored by Bogetoft et al. (Eur J Oper Res 168:450–462, 2006), results in complete agreement with the results found by those authors. The theoretical relevance of the approach is discussed thoroughly.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号