Similar Articles
20 similar articles found.
1.
Consider a simple two-state risk with equal probabilities for the two states. In particular, assume that the random wealth variable \(\tilde{X}_i\) dominates \(\tilde{Y}_i\) via ith-order stochastic dominance, for i = M, N. We show that the 50-50 lottery \([\tilde{X}_N + \tilde{Y}_M,\ \tilde{Y}_N + \tilde{X}_M]\) dominates the 50-50 lottery \([\tilde{X}_N + \tilde{X}_M,\ \tilde{Y}_N + \tilde{Y}_M]\) via (N+M)th-order stochastic dominance. The basic idea is that a decision maker exhibiting (N+M)th-order stochastic dominance preference allocates the state-contingent lotteries so as not to group the two “bad” lotteries in the same state, where “bad” is defined via ith-order stochastic dominance. In this way, we extend and generalize existing results about risk attitudes. This lottery preference includes behavior exhibiting higher-order risk effects, such as precautionary effects and tempering effects.
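As a quick numerical illustration of this apportioning preference, the hedged sketch below (not from the paper; the lotteries, wealth level, and log utility are illustrative assumptions) takes M = 1 and N = 2 and checks that a log-utility decision maker, whose derivatives alternate in sign, prefers the 50-50 lottery that splits the two dominated ("bad") components across the two states.

```python
import math

# Illustrative assumption: M = 1 (first order), N = 2 (second order).
# X_M = 100 first-order dominates Y_M = 0 (both degenerate).
# X_N = 0 (a sure amount) second-order dominates Y_N = +/-50 with equal
# probability (a zero-mean risk).
w = 200.0                      # base wealth (assumed)
u = math.log                   # utility whose derivative signs alternate

def expected_utility(lottery):
    """lottery: list of (probability, wealth) pairs."""
    return sum(p * u(x) for p, x in lottery)

# Preferred 50-50 lottery [X_N + Y_M, Y_N + X_M]: one "bad" per state.
apportioned = [(0.5, w + 0 + 0),          # state 1: X_N + Y_M
               (0.25, w + 50 + 100),      # state 2: Y_N + X_M, Y_N = +50
               (0.25, w - 50 + 100)]      # state 2: Y_N + X_M, Y_N = -50

# Dominated 50-50 lottery [X_N + X_M, Y_N + Y_M]: both "bads" together.
grouped = [(0.5, w + 0 + 100),            # state 1: X_N + X_M
           (0.25, w + 50 + 0),            # state 2: Y_N + Y_M, Y_N = +50
           (0.25, w - 50 + 0)]            # state 2: Y_N + Y_M, Y_N = -50

print(expected_utility(apportioned) > expected_utility(grouped))  # True
```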

2.
Traditional economic analyses of the peak-load problem typically assume an unrealistic degree of regularity in demand during well-defined peak and off-peak periods. This issue is addressed through a comprehensive statistical model that separates demand into its systematic and stochastic components. This model is combined with a traditional economic model and applied to local telephone service, leading to substantive conclusions relevant for managerial decisions as well as further research, among them:
  • Neglecting the systematic and stochastic structure of demand may lead to inefficient tariffs. Efficient measured-service structures typically price individual calls below incremental capacity cost.
  • Industry-wide capacity decision rules that are driven exclusively by blockage-probability targets during narrowly defined time periods may be economically inefficient.
  • For telephone service, spot pricing, which sets high prices during periods of actual congestion, has the potential to be considerably more efficient than traditional tariffs that set high prices during periods of expected congestion.
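To make the systematic/stochastic distinction concrete, the hedged sketch below (an illustration, not the paper's model; the demand process, parameter values, and names are assumptions) decomposes simulated hourly call demand into a systematic time-of-day profile plus a stochastic residual, and shows how often the hour of actual peak demand differs from the systematic peak hour, which is the gap that spot pricing exploits.

```python
import numpy as np

rng = np.random.default_rng(0)
days, hours = 250, 24

# Assumed systematic time-of-day profile plus i.i.d. stochastic shocks.
systematic = 100 + 40 * np.sin(np.pi * np.arange(hours) / 12)   # calls per hour
demand = systematic + rng.normal(scale=25, size=(days, hours))  # observed demand

# Estimate the systematic component as the hour-of-day average;
# the residual is the stochastic component.
systematic_hat = demand.mean(axis=0)
stochastic_hat = demand - systematic_hat

expected_peak_hour = int(systematic_hat.argmax())
actual_peak_hours = demand.argmax(axis=1)

# Share of days on which the realized peak falls outside the "expected"
# peak hour -- congestion that a fixed peak-period tariff misses.
mismatch = float(np.mean(actual_peak_hours != expected_peak_hour))
print(f"expected peak hour: {expected_peak_hour}, "
      f"share of days with a different actual peak: {mismatch:.2f}")
```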

    3.
This paper proposes two (ordinal and cardinal) generalizations of [J.C. Harsanyi, R. Selten, A General Theory of Equilibrium Selection in Games, MIT Press, Cambridge, MA and London, 1988] risk-dominance to multi-player, multi-action games. There are three reasons why generalized risk-dominance (GR-dominance) is interesting. First, extending the logic of risk-dominance, GR-dominant actions can be interpreted as best responses to conjectures that satisfy a certain type of symmetry. Second, in a local interaction game of [G. Ellison, Learning, local interaction, and coordination, Econometrica 61 (5) (1993) 1047], if an action is risk-dominant in individual binary interactions with neighbors, it is also GR-dominant in the large game on a network. Finally, we show that GR-dominant actions are stochastically stable under a class of evolutionary dynamics. The last observation is a corollary to new abstract selection results that apply to a wide class of so-called asymmetric dynamics. In particular, I show that a (strictly) ordinal GR-dominant profile is (uniquely) stochastically stable under the approximate best-response dynamics of [M. Kandori, G.J. Mailath, R. Rob, Learning, mutation, and long run equilibria in games, Econometrica 61 (1) (1993) 29]. A (strictly) cardinal GR-dominant equilibrium is (uniquely) stochastically stable under a class of payoff-based dynamics that includes [L.E. Blume, The statistical-mechanics of strategic interaction, Games Econ. Behav. 5 (3) (1993) 387-424]. Among others, this leads to a generalization of a result from [G. Ellison, Basins of attraction, long-run stochastic stability, and the speed of step-by-step evolution, Rev. Econ. Stud. 67 (230) (2000) 17] on ½-dominant evolutionary selection to all networks, and to the unique selection on all networks that satisfy a simple, sufficient condition.
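For reference, the hedged sketch below (a generic illustration with assumed payoffs, not the paper's GR-dominance construction) applies the underlying Harsanyi-Selten criterion that GR-dominance extends: in a 2x2 coordination game, the equilibrium with the larger product of the players' deviation losses is risk-dominant.

```python
# Two-player coordination game with pure equilibria (A, A) and (B, B).
# Payoffs indexed by (row action, column action); values are assumed.
u1 = {('A', 'A'): 4, ('A', 'B'): 0, ('B', 'A'): 3, ('B', 'B'): 3}  # row player
u2 = {('A', 'A'): 4, ('A', 'B'): 3, ('B', 'A'): 0, ('B', 'B'): 3}  # column player

# Harsanyi-Selten: (A, A) risk-dominates (B, B) iff the product of the
# players' losses from unilaterally deviating away from (A, A) exceeds
# the corresponding product for (B, B).
loss_AA = (u1[('A', 'A')] - u1[('B', 'A')]) * (u2[('A', 'A')] - u2[('A', 'B')])
loss_BB = (u1[('B', 'B')] - u1[('A', 'B')]) * (u2[('B', 'B')] - u2[('B', 'A')])

if loss_AA > loss_BB:
    print("(A, A) is risk-dominant")
elif loss_BB > loss_AA:
    print("(B, B) is risk-dominant")
else:
    print("neither equilibrium risk-dominates the other")
```

In this assumed stag-hunt-like example, (A, A) is payoff-dominant but (B, B) is risk-dominant, which is exactly the tension that evolutionary selection results resolve in favor of risk-dominance.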

    4.
This paper assesses the stochastic convergence of relative \(\hbox{CO}_2\) emissions within 28 OECD countries over the period 1950–2013. Using the local Whittle estimator and some of its variants, we assess whether relative per capita \(\hbox{CO}_2\) emissions are long memory processes which, although highly persistent, may revert to their mean or trend in the long run, thereby indicating evidence of stochastic convergence. Furthermore, we test whether (possibly) slow convergence or the complete lack of it may be the result of structural changes to the deterministics of each of the relative per capita emissions series, by means of the tests of Berkes et al. (Ann Stat 1140–1165, 2006) and Mayoral (Oxford Bull Econ Stat 74(2):278–305, 2012). Our results show relatively weak support for stochastic convergence of \(\hbox{CO}_2\) emissions, indicating that only between 30 and 40% of the countries converge to the OECD average in a stochastic sense. This weak evidence disappears if we enlarge the sample to include 4 of the 5 BRICS, indicating that our results are not robust to the inclusion of countries whose growth rates are far larger than those of the OECD members. Our results also decisively indicate that slow or absent convergence is not the result of a structural break in the relative \(\hbox{CO}_2\) emissions series.
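The hedged sketch below (a generic illustration, not the authors' code; the simulated series and bandwidth are assumptions) shows the basic local Whittle estimator of the long-memory parameter d used in this kind of stochastic-convergence analysis: it minimizes the local Whittle objective over the first m periodogram ordinates.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def local_whittle_d(x, m):
    """Local Whittle estimate of the memory parameter d from the first
    m periodogram ordinates (Robinson-style objective)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    # Periodogram at the first m Fourier frequencies.
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    I = np.abs(dft) ** 2 / (2 * np.pi * n)

    def objective(d):
        g = np.mean(freqs ** (2 * d) * I)
        return np.log(g) - 2 * d * np.mean(np.log(freqs))

    res = minimize_scalar(objective, bounds=(-0.49, 1.49), method="bounded")
    return res.x

# Illustrative check on a short-memory series: d should be close to 0.
rng = np.random.default_rng(1)
series = rng.normal(size=2000)
print(round(local_whittle_d(series, m=int(len(series) ** 0.65)), 2))
```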

    5.
    6.
This paper defines the rate of substitution of one stochastic change to a random variable for another. It then focuses on the case where one of these changes is an nth degree risk increase and the other is an mth degree risk increase, where n > m ≥ 1. The paper shows that the rate of substitution for these two risk increases can be used to provide a broader definition and two additional characterizations of the nth degree Ross more risk averse partial order. The implications for local intensity measures of nth degree risk aversion are also examined. The analysis organizes the existing results as well as generates new ones.

    7.
The paper examines the processes underlying economic fluctuations by investigating the volatility moderation of the U.S. economy in the early 1980s. Using a dynamic factor framework, we decompose the volatility decline into a common stochastic trend, a common transitory component, and idiosyncratic components. We find that the moderation of the business cycle was a result of the moderation in the transitory and idiosyncratic components. Our results suggest that an important part of the stochastic process that drives the economy is transitory. The paper also investigates the role of oil prices and of monetary and financial-market factors; the proposed economic factors do not have a significant relationship to either the transitory or the permanent component. In addition, we find that transitory shocks are as common during the 1980s and 1990s as they were during the 1960s and 1970s.

    8.
In the time-series context, estimation and testing issues with autoregressive and moving average (ARMA) models are well understood. Similar issues in the context of spatial ARMA models for the regression disturbance, however, remain largely unexplored. In this paper, we discuss the problem of testing for no spatial dependence in the disturbances against the alternative of a spatial ARMA process, incorporating the possible presence of spatial dependence in the dependent variable. The problems of conducting such a test are twofold. First, under the null hypothesis the nuisance parameter is not identified, resulting in a singular information matrix (IM), which is a nonregular case in statistical inference. To account for the singular IM, we follow Davies (Biometrika 64(2):247–254, 1977; Biometrika 74(1):33–43, 1987) and propose a test procedure based on the supremum of the Rao score test statistic. Second, the possible presence of spatial lag dependence has an adverse effect on the performance of the test. Using the general test procedure of Bera and Yoon (Econom Theory 9:649–658, 1993) under local misspecification, we avoid explicit estimation of the spatial autoregressive parameter, so our suggested tests are based entirely on ordinary least squares estimation. The tests suggested here can be viewed as a generalization of Anselin et al. (Reg Sci Urban Econ 26:77–104, 1996). We conduct Monte Carlo simulations to investigate the finite sample properties of the proposed tests. Simulation results show that our tests have good finite sample properties, both in terms of size and power, compared to other tests in the literature. We also illustrate the applications of our tests through several data sets.
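As a hedged, generic illustration of the Davies-type sup-test idea (not the paper's spatial test; the model, nuisance-parameter grid, and bootstrap design are all assumptions), the sketch below computes a score-type statistic over a grid of a nuisance parameter that is unidentified under the null and simulates the null distribution of its supremum.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.uniform(-2, 2, size=n)
y = 0.4 * np.exp(0.8 * x) + rng.normal(size=n)   # assumed DGP with a real effect

gamma_grid = np.linspace(0.1, 2.0, 20)           # grid for the nuisance parameter

def sup_lm(y, x, grid):
    """Supremum over the grid of the LM statistic n*R^2 from regressing
    y on [1, exp(gamma*x)]; gamma is unidentified when the slope is zero."""
    n = len(y)
    stats = []
    for g in grid:
        z = np.column_stack([np.ones(n), np.exp(g * x)])
        resid = y - z @ np.linalg.lstsq(z, y, rcond=None)[0]
        stats.append(n * (1.0 - resid.var() / y.var()))
    return max(stats)

observed = sup_lm(y, x, gamma_grid)

# Bootstrap the null (no effect): y* is pure noise around its mean.
boot = [sup_lm(y.mean() + rng.normal(scale=y.std(), size=n), x, gamma_grid)
        for _ in range(500)]
p_value = np.mean(np.array(boot) >= observed)
print(f"sup-LM = {observed:.1f}, bootstrap p-value = {p_value:.3f}")
```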

    9.
This paper reexamines the condition (1 + n), which Zilcha (1991) presents as a necessary and sufficient condition for dynamic inefficiency of stationary allocations in overlapping generations models with stochastic production. We show that this condition is necessary but not sufficient for a stationary allocation to be dynamically inefficient by Zilcha's definition. We also show that there is a narrow but widely studied class of specifications in which the Zilcha test is both necessary and sufficient for dynamic inefficiency of stationary competitive equilibrium allocations. Outside this class, however, counterexamples can be constructed relatively easily.

    10.
This paper extends the work on habit formation of Pollak [Habit formation and dynamic demand functions. J. Polit. Econ. (1970)] and provides a critical counterexample to a conjecture of von Weizsäcker [Notes on endogenous changes of tastes, J. Econ. Theory (1971)] concerning the existence of a “long-run utility function.” A linear specification of habit formation is applied to a general system of demand functions with linear Engel curves. It is shown that there exists a utility function which rationalizes the long-run demand functions if and only if they are the steady-state solution to a system of short-run demand functions generated by an additive utility function.
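The hedged sketch below (a generic linear-habit illustration under assumed parameters, not Pollak's full demand system) shows the mechanics behind a "long-run" demand function: with a linear habit term, short-run demand depends on last period's consumption, and iterating to the steady state yields the long-run demand whose rationalizability the paper characterizes.

```python
# Linear habit formation with illustrative (assumed) parameters:
# short-run demand  q_t = a + b*p + c*q_{t-1},  with 0 < c < 1.
a, b, c = 10.0, -2.0, 0.5

def short_run_demand(p, q_prev):
    return a + b * p + c * q_prev

def long_run_demand(p):
    # Steady state: q* = a + b*p + c*q*  =>  q* = (a + b*p) / (1 - c)
    return (a + b * p) / (1 - c)

# Iterating the short-run demand at a fixed price converges to the
# long-run (steady-state) demand.
p, q = 2.0, 0.0
for _ in range(50):
    q = short_run_demand(p, q)
print(round(q, 4), round(long_run_demand(p), 4))   # both approximately 12.0
```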

    11.
In this paper we consider the standard voting model with a finite set of alternatives A and n voters, and address the following question: what are the characteristics of domains \({\mathcal D}\) that induce the property that every strategy-proof social choice function \({f: {\mathcal D}^n \rightarrow A}\) satisfying unanimity has the tops-only property? We first impose a minimal richness condition which ensures that for every alternative a, there exists an admissible ordering where a is maximal. We identify conditions on \({\mathcal D}\) that are sufficient for strategy-proofness and unanimity to imply tops-onlyness in the general case of n voters and in the special case n = 2. We provide an algorithm for constructing tops-only domains from connected graphs with the elements of A as nodes. We provide several applications of our results. Finally, we relax the minimal richness assumption and partially extend our results.

    12.
This paper aims to test whether a given type of process innovation, namely flexible production technologies (FPTs), contributes to increased firm efficiency. Using firm-level data for a single year from the Portuguese manufacturing industry and applying a parametric stochastic frontier approach, individual technical efficiencies are obtained and their determinants simultaneously estimated, using the single-step procedure proposed by Battese and Coelli (Empirical Economics 20:325–332, 1995). The results support the hypothesis that technological flexibility, measured through the use of FPTs, is important in explaining differences in efficiency. Furthermore, given the specification of the stochastic frontier function, the null hypothesis that Portuguese firms are fully technically efficient is rejected.
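For readers unfamiliar with the method, the hedged sketch below (a minimal normal/half-normal production frontier on simulated data, not the Battese-Coelli inefficiency-effects model; data, names, and starting values are assumptions) estimates a stochastic frontier by maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # log inputs (assumed)
beta_true = np.array([1.0, 0.6, 0.3])
u = np.abs(rng.normal(scale=0.3, size=n))                     # inefficiency >= 0
v = rng.normal(scale=0.2, size=n)                             # symmetric noise
y = X @ beta_true + v - u                                     # log output

def neg_loglik(theta):
    """Normal/half-normal production frontier: eps = y - X beta = v - u."""
    beta, log_sv, log_su = theta[:3], theta[3], theta[4]
    sv, su = np.exp(log_sv), np.exp(log_su)
    sigma = np.sqrt(sv**2 + su**2)
    lam = su / sv
    eps = y - X @ beta
    ll = (np.log(2) - np.log(sigma)
          + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

theta0 = np.concatenate([np.linalg.lstsq(X, y, rcond=None)[0], [np.log(0.1)] * 2])
res = minimize(neg_loglik, theta0, method="BFGS")
print("frontier coefficients:", np.round(res.x[:3], 2))
```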

    13.
The concept of ergodicity in economics seems to have the qualities of a shibboleth—a word or saying used by adherents of a party, sect, or belief, and usually regarded by others as empty of real meaning. It is in use both by neoclassical economics—after Samuelson (1965, p. 43), who used the term in his paper on what later became a foundation of the efficient market hypothesis—and by post Keynesian economics—after Davidson, who picked up the term in order to highlight methodological differences. Considering the origin of the concept in statistical physics and its use in the topology of dynamical systems, with which most economists are not conversant, the importance ascribed to ergodicity in economic debate seems mystifying. We deconstruct the meaning of the term in the major contributions of Samuelson and Davidson. We suggest an alternative to (non)ergodicity for discussing the nature of randomness in the real world. While neoclassical theory assumes stochastic randomness, post Keynesians assume nonstochastic randomness, a term developed by the mathematician Kolmogorov (1986, p. 467). We argue that even in an ergodic world there is a problem with the idea that stochastic randomness can be dealt with by the financial system.

    14.
It is argued that if x_t ~ I(1) and y_t ~ I(1), then regressing x_t on y_t will generally produce spurious results because e_t will generally be I(1). However, there may exist a b such that e_t = x_t − b·y_t is I(0), in which case regressing x_t on y_t does not produce spurious results. This special case of two integrated time series is known in the literature as cointegration, and x_t and y_t are then said to be cointegrated. In our review of the development of the concept of cointegration, we identify that the underlying reason for this special case to arise is the proposition that if x_t ~ I(d_x) and y_t ~ I(d_y), then z_t = b·x_t + c·y_t ~ I(max(d_x, d_y)). In this research, we offer evidence against this proposition.
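As a hedged illustration of the textbook setup the paper questions (simulated data and parameters are assumptions, and this is the standard Engle-Granger check rather than the authors' procedure), the sketch below simulates two cointegrated I(1) series and tests the residual from the levels regression for stationarity.

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(4)
n = 1000

# x_t is a random walk (I(1)); y_t = 2*x_t + stationary noise, so
# e_t = y_t - 2*x_t is I(0) and the pair is cointegrated with b = 2.
x = np.cumsum(rng.normal(size=n))
y = 2.0 * x + rng.normal(size=n)

# Engle-Granger two-step test: levels regression of y on x, then a
# unit-root test on the residuals with cointegration critical values.
t_stat, p_value, _ = coint(y, x)
print(f"Engle-Granger t-statistic = {t_stat:.2f}, p-value = {p_value:.4f}")

# OLS estimate of the cointegrating coefficient b (should be close to 2).
b_hat = np.polyfit(x, y, 1)[0]
print(f"estimated b = {b_hat:.3f}")
```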

    15.
The purpose of the paper is to examine formally the fundamental implication that technical inefficiency (TI) is related to firm exit. Traditional stochastic frontier models allow for the measurement of TI but do not allow for a direct effect of TI on exit. We propose a model which allows for such effects and consists of a stochastic frontier model plus an additional equation that describes the probability of exit as a function of covariates and TI. Since TI is unobserved, econometric complications arise, and obtaining consistent estimates is non-trivial due to the presence of integrals in the likelihood function. We propose and implement single-step maximum likelihood estimation, employing data for 3,404 manufacturing firms in Greece. We find significant positive effects of TI on the probability of exit. We also propose and provide measures of TI that respect the fact that unobserved TI affects the probability of exit, and compare them to TI measures from the traditional stochastic frontier model.

    16.
This paper estimates the technical and allocative inefficiencies of the transmission-distribution sector of Japanese electric utilities using panel data for the 1981–1998 period. A stochastic production frontier of the CES form is estimated jointly with input demand equations. Taking advantage of self-duality, we retrieve the cost frontier, from which the impacts of technical and allocative inefficiencies on costs and input demands are measured. The estimated elasticity of substitution is significantly different from unity, favoring the CES specification over the Cobb–Douglas. The results show that observed costs are 9 to 48% higher than the efficient level; technical inefficiency raises costs by 1 to 28%, while allocative inefficiency does so by 8 to 30%. Although their impacts on costs are similar, technical inefficiency fluctuates more, so the differences in the performance of utilities are mainly due to technical inefficiency. We also find a substantial over-utilization of capital in all utilities.

    17.
    18.
The present paper analyzes the changes in the economic constitution of the European Community since its foundation in 1958. In order to identify the various changes, we start by developing a frame of reference. Our proposition is that the constitutional charter of the European Economic Community (EEC)—the EEC Treaty—came closest to this frame of reference, being an economic constitution for a market system, whereas the subsequent process of European integration—including several modifications of the Treaty—was largely based on the introduction of non-market elements. Our argument is that as far as the economic constitution is concerned, the Treaty of Maastricht is dominated by traits which are characteristic of modern welfare states.

    19.
Pricing carbon is a central concern in environmental economics, due to the worldwide importance of emissions trading schemes to regulate pollution. This paper documents the presence of small and large jumps in the stochastic process of the \(\hbox{CO}_2\) futures price. The large jumps have a discrete origin, i.e. they can arise from various demand factors or institutional decisions on the tradable permits market. Contrary to the existing literature, we show that the stochastic process of carbon futures prices does not contain a continuous component (Brownian motion). The results are derived by using high-frequency data in the activity signature function framework (Todorov and Tauchen in J Econom 154:125–138, 2010; Todorov and Tauchen in J Bus Econ Stat 29:356–371, 2011). The implication is that the carbon futures price should be modeled as an appropriately sampled, centered Lévy or Poisson process. The pure-jump behavior of the carbon price might be explained by the lower volume of trades on this allowance market (compared to other highly liquid financial markets).

    20.
Subsidy-free VCG mechanisms assign p identical objects to n agents. The efficiency loss is the largest ratio of budget surplus to efficient surplus, over all profiles of non-negative valuations. The smallest efficiency loss satisfies . If is bounded away from , converges to zero exponentially in n. Participation is voluntary in the optimal mechanism achieving if p = 1, but not if p ≥ 2. Among voluntary mechanisms, the optimal efficiency loss is not significantly larger than if . But it does not converge to zero in n if .
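As a hedged illustration of the quantities in play (the pivotal/Vickrey mechanism with unit demand under assumed valuations, not the optimal mechanism characterized in the paper), the sketch below computes the VCG assignment of p identical objects, the resulting budget surplus, and its ratio to the efficient surplus for one valuation profile.

```python
# Vickrey (pivotal) mechanism for p identical objects, unit demand.
# The valuation profile is an illustrative assumption.
def vcg_surplus_ratio(valuations, p):
    v = sorted(valuations, reverse=True)
    winners = v[:p]                       # efficient assignment: top p valuations
    efficient_surplus = sum(winners)
    # Each winner pays the externality it imposes: the highest losing valuation.
    price = v[p] if p < len(v) else 0.0
    budget_surplus = p * price            # payments collected, nothing rebated
    return budget_surplus / efficient_surplus

valuations = [9.0, 7.0, 6.5, 3.0, 1.0]    # n = 5 agents (assumed)
print(round(vcg_surplus_ratio(valuations, p=2), 3))  # surplus burnt relative to 9 + 7
```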
