Similar Articles
20 similar articles found (search time: 31 ms)
1.
An “investment bubble” is a period of “excessive, and predictably unprofitable, investment” (DeMarzo et al. in J Financ Econ 85:737–754, 2007). Such bubbles most often accompany the arrival of some new technology, such as the tech stock boom and bust of the late 1990s and early 2000s. We provide a rational explanation for investment bubbles based on the dynamics of learning in highly uncertain environments. Objective information about the earnings potential of a new technology gives rise to a set of priors or a belief function. A generalised form of Bayes’ rule is used to update this set of priors using earnings data from the new economy. In each period, agents—who are heterogeneous in their tolerance for ambiguity—make optimal occupational choices, with wages in the new economy set to clear the labour market. A preponderance of bad news about the new technology may nevertheless give rise to increasing firm formation around this technology, at least initially. To a frequentist outside observer, the pattern of adoption appears as an investment bubble.
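The mechanism above can be illustrated with a minimal sketch, which is not the authors' model: assume Bernoulli "good/bad earnings news" signals and represent the belief function by a finite set of Beta priors, each updated by ordinary Bayes' rule (the paper's generalised rule over a full belief function is richer). All parameter values below are made up for illustration.

```python
def update_beta_set(priors, successes, failures):
    """Conjugate Bayesian update applied prior-by-prior to a *set* of priors.

    priors: list of (alpha, beta) Beta-distribution parameters.
    Returns the list of updated (alpha, beta) pairs.
    """
    return [(a + successes, b + failures) for (a, b) in priors]

def posterior_mean_interval(priors):
    """Range of posterior mean success probabilities across the set."""
    means = [a / (a + b) for (a, b) in priors]
    return min(means), max(means)

# A set of priors reflecting disagreement about the new technology's
# probability of success: pessimistic, neutral, optimistic.
prior_set = [(1.0, 9.0), (5.0, 5.0), (9.0, 1.0)]

# A preponderance of bad news: 2 successes against 8 failures.
posterior_set = update_beta_set(prior_set, successes=2, failures=8)
lo, hi = posterior_mean_interval(posterior_set)
```

Even after mostly bad news the upper end of the interval stays above one half, so the most ambiguity-tolerant agents can still rationally enter the new sector, which is the flavor of the entry dynamic the abstract describes.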

2.
Chile's hydroelectric industry was privatized in 1985, but required to operate within a regulatory framework designed to achieve a competitive outcome. A centralized dispatch center was established to ensure production at minimum cost, subject to constraints on minimum release and minimum reservoir stock. A reluctance to rapidly reduce the industry work force may also have existed. We develop a constrained cost-minimization model for thermal and hydro generation to obtain the shadow price of water and to determine the qualitative effect of these constraints on allocative efficiency. Using panel data from 1986–1997, we assess the economic efficiency of the hydro industry by estimating a stochastic distance frontier and price equations from the dual cost-minimization problem. We find dramatic increases in technical change and productivity change, with positive efficiency change for all years but the last. We also observe a dramatic decline in allocative inefficiencies over our sample period. The share of hydro generation from run-of-river and thermal plants relative to reservoir plants has increased, presumably in reaction to the water release and reservoir stock constraints, reducing the relative over-utilization of capital to water from the pre-1985 regime. Further, the over-utilization of labor to capital and water has fallen over time. However, considerable allocative inefficiencies remain, consistent with our finding of industry-wide scale economies. Substantial cost savings would result if technical and allocative inefficiencies were eliminated.
JEL Classification: L94, D24
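The shadow price of water in a cost-minimization model is the cost saving from marginally relaxing the water constraint. A stylised two-technology merit-order sketch makes this concrete; it is not the paper's estimated model, and the cost and demand figures are invented for illustration.

```python
def min_cost_dispatch(demand, water_cap, c_hydro=5.0, c_thermal=50.0):
    """Least-cost dispatch: use cheap hydro up to the water constraint,
    then meet residual demand with thermal generation."""
    hydro = min(demand, water_cap)
    thermal = demand - hydro
    return c_hydro * hydro + c_thermal * thermal

# Shadow price of water = cost saving from one extra unit of water.
base = min_cost_dispatch(demand=100.0, water_cap=60.0)
relaxed = min_cost_dispatch(demand=100.0, water_cap=61.0)
shadow_price_water = base - relaxed
```

Here each extra unit of water displaces one unit of thermal output, so the shadow price equals the thermal/hydro marginal-cost gap (45 with these illustrative numbers); it would drop to zero once the water constraint stopped binding.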

3.
In 2007 Nicholas Stern’s Review (in Science 317:201–202, 2007) estimated that global GDP would shrink by 5–20% due to climate change, which brought forth calls to reduce emissions by 30–70% in the next 20 years. Stern’s results were contested by Weitzman (in J Econ Lit XLV(3):703–724, 2007), who argued for more modest reductions in the near term, and Nordhaus (in Science 317:201–202, 2007), who questioned the low discount rate and coefficient of relative risk aversion employed in the Stern Review, leading him to argue that ‘the central questions about global-warming policy—how much, how fast, and how costly—remain open.’ We present a simulation model developed by Färe et al. (in Time substitution with application to data envelopment analysis, 2009) on intertemporal resource allocation that allows us to shed some light on these questions. The empirical specification constrains the amount of undesirable output a country can produce over a given period, with each country choosing the magnitude and timing of its reductions. We examine the production technology of 28 OECD countries over 1992–2006, in which countries produce real GDP and CO2 using capital and labor, and simulate the magnitude and timing of reductions necessary to comply with the Kyoto Protocol. This tells us ‘how fast’ and ‘how much’. Comparison of observed GDP and simulated GDP under the emissions constraints tells us ‘how costly’. We find these costs to be relatively low if countries are allowed to reallocate production decisions across time, and that emissions should be cut gradually at the beginning of the period, with larger cuts starting in 2000.
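The qualitative finding, small cuts early and larger cuts later, falls out of any model in which abatement gets cheaper over time. The sketch below is not Färe et al.'s time-substitution DEA model; it is a stylised closed-form allocation with quadratic abatement costs and assumed cost weights, included only to show the back-loading logic.

```python
def allocate_reductions(total_cut, cost_weights):
    """Minimise sum_t w_t * r_t**2 subject to sum_t r_t = total_cut.

    The first-order conditions equalise marginal costs 2 * w_t * r_t
    across periods, so r_t is proportional to 1 / w_t.
    """
    inv = [1.0 / w for w in cost_weights]
    scale = total_cut / sum(inv)
    return [scale * v for v in inv]

# Abatement-cost weights falling over time (cutting gets cheaper later):
cuts = allocate_reductions(total_cut=70.0, cost_weights=[4.0, 2.0, 1.0])
```

With these illustrative weights the optimal schedule is 10, 20, 40: the cheapest periods absorb the largest reductions, mirroring the paper's conclusion that cuts should start gradually.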

4.
Determining the profit maximizing input–output bundle of a firm requires data on prices. This paper shows how endogenously determined shadow prices can be used in place of actual prices to obtain the optimal input–output bundle where the firm’s shadow profit is maximized. This approach amounts to an application of the Weak Axiom of Profit Maximization (WAPM) formulated by Varian (1984, The nonparametric approach to production analysis, Econometrica 52(3):579–597) based on shadow prices rather than actual prices. At these shadow prices, the shadow profit of a firm is zero. The maximum shadow profit that could have been attained at some other input–output bundle is shown to be a measure of the inefficiency of the firm. Because the benchmark input–output bundle is always an observed bundle from the data, it can be determined without having to solve any elaborate programming problem.
Subhash C. Ray
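The last sentence of the abstract, benchmarking only against observed bundles, can be sketched directly. This is a single-input, single-output toy with invented numbers, not the paper's construction: the shadow prices are simply chosen so that the evaluated firm earns zero shadow profit, and inefficiency is the best shadow profit attainable at any observed bundle.

```python
def profit(p, w, y, x):
    """Profit of a single-output, single-input bundle at prices (p, w)."""
    return p * y - w * x

# Observed (input, output) bundles for four firms -- illustrative data.
bundles = [(2.0, 4.0), (3.0, 9.0), (4.0, 10.0), (5.0, 11.0)]

def wapm_inefficiency(i, p, w):
    """Max shadow profit over observed bundles minus firm i's own shadow
    profit; zero iff no observed bundle does better at these prices."""
    profits = [profit(p, w, y, x) for (x, y) in bundles]
    return max(profits) - profits[i]

# Prices (p, w) = (1, 2) give firm 0 zero shadow profit: 1*4 - 2*2 = 0.
ineff_firm0 = wapm_inefficiency(0, p=1.0, w=2.0)
ineff_firm1 = wapm_inefficiency(1, p=1.0, w=2.0)
```

No programming problem is solved: the benchmark is just an argmax over the finite set of observed bundles, which is the computational point the abstract emphasises.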

5.
Asymptotics for panel quantile regression models with individual effects
This paper studies panel quantile regression models with individual fixed effects. We formally establish sufficient conditions for consistency and asymptotic normality of the quantile regression estimator when the number of individuals, n, and the number of time periods, T, jointly go to infinity. The estimator is shown to be consistent under similar conditions to those found in the nonlinear panel data literature. Nevertheless, due to the non-smoothness of the objective function, we had to impose a more restrictive condition on T to prove asymptotic normality than that usually found in the literature. The finite sample performance of the estimator is evaluated by Monte Carlo simulations.
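The estimator minimises the Koenker–Bassett check-function loss over individual intercepts and common slopes. As a minimal sketch (not the paper's estimator, which is solved by linear programming rather than grid search), the toy below recovers the parameters of a noise-free two-individual panel by brute force; all data are fabricated.

```python
import numpy as np

def check_loss(u, tau):
    """Koenker-Bassett check function rho_tau, summed over residuals."""
    return float(np.where(u >= 0, tau * u, (tau - 1.0) * u).sum())

# Noise-free toy panel: y_it = alpha_i + beta * x_it, n = 2 individuals, T = 3.
x = np.array([[0.0, 1.0, 2.0], [0.0, 1.0, 2.0]])
y = np.array([[1.0, 3.0, 5.0], [3.0, 5.0, 7.0]])   # alpha = (1, 3), beta = 2

# Brute-force median regression (tau = 0.5) over a coarse parameter grid
# containing the true values.
grid = np.arange(0.0, 5.01, 0.5)
best = min(
    ((b, a1, a2) for b in grid for a1 in grid for a2 in grid),
    key=lambda p: check_loss(
        np.concatenate([y[0] - p[1] - p[0] * x[0],
                        y[1] - p[2] - p[0] * x[1]]), tau=0.5),
)
```

The non-smoothness the abstract mentions is visible here: the loss has kinks at zero residuals, which is why the paper needs a stronger condition on T for asymptotic normality than smooth nonlinear panel estimators do.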

6.
Conclusions

The shadow prices associated with resource and policy constraints can be used to decentralize project selection among project managers so that their choices accord with these constraints and with the stipulated objective function. Project managers are instructed to value their inputs and outputs (suitably redefined so as to include contributions to social goals) at their shadow values, using as discount rate the tradeoff (minus 1) between consumption in year 1 and in year 0. They will choose the optimal set of independent projects if they adopt all those with positive NPVs and, among mutually exclusive projects with positive NPVs, if they select that with the highest NPV. Project managers will be indifferent toward projects whose NPVs are zero, and will require some additional directives if they are to adopt fractional projects to the extent prescribed in the optimum solution. If a new project were to become available for evaluation at this point, the same set of shadow prices can be used to determine its NPV. Only if this NPV were positive and the project sufficiently large would there be a need to recalculate the set of shadow prices. As was pointed out above, the dual variables corresponding to the primal constraints indicate the tradeoffs of resource amounts or target levels against the maximand which policymakers implicitly make when they select their values. The magnitudes of these tradeoffs may then lead them to revise the constrained target levels upwards or downwards until they reach more appropriate levels. Several iterations may be needed before a satisfactory or best-compromise solution is reached. Instead of the constraint method outlined above, other multiobjective programming models can be used to generate as many noninferior or nondominated solutions in objective space as desired.

In the weighting method first outlined by Zadeh (1963), for example, all objectives are included in the objective function of the primal program and are assigned arbitrary positive weights. After solving the corresponding linear program, these weights are systematically varied to trace out solutions (all of which are noninferior) in objective space. Once the policymaker has chosen the best of these, the weights which have given rise to it can be used as tradeoffs for the objectives under consideration. The superiority of these multiobjective methods over the UNIDO bottom-up approach resides in the fact that policymakers make their best-compromise choices in objective space rather than decision space. That is, they choose the most desired combination of objectives, and only as an implication of this are projects accepted or rejected. In the bottom-up approach, instead, the policymakers are assumed to choose directly in decision space, which includes the primal variables Pi, a procedure which appears to place the proverbial cart before the horse.

One of the strengths of programming models is their versatility. Models of the type discussed above can readily be adapted to allow for such factors as a greater number of time periods, the possible reinvestment of project cash flows, the assigning of distributional weights to different economic agents, the introduction of foreign trade and investment, and limitations on the availability of specialized factors. Mutual exclusiveness within broad classes of projects can also be introduced, as can other forms of project interdependence such as one-way dependence (i.e., the investment in project i being contingent on project j also being adopted) and complementarity (the simultaneous adoption of projects i and j has different net output effects than the sum of their adoption as single projects). In cases where project indivisibility is an important consideration, integer programming can be utilized. Scale economies can also be incorporated in a programming framework (see Westphal, 1975, and Hendrick and Stoutjesdijk, 1978).

This enthusiasm for programming models should be tempered by certain limitations they possess in addition to their indubitable advantages. First, in multiperiod programming models, some of the shadow prices have been found to be unstable in the sense of fluctuating sizeably from one period to the next, a feature which fails to inspire confidence in their utilization for project implementation. Secondly, programming models have sometimes been carelessly specified, for example by failing to bring out the implications of additional labour employment for a labour-surplus economy's consumption-investment mix, and thus attributing to labour a shadow price of zero when it is in fact positive. This would lead to the choice of unduly labour-intensive projects. Thirdly, our approach assumes that a sizeable number of project proposals are ready to be evaluated at one point in time, as opposed to the UNIDO assumption that projects are considered sequentially for possible adoption. Admittedly, it is a counsel of perfection to demand of Central Planning Offices that they maintain an updated shelf full of projects ready for possible implementation. It remains, however, true that at any point in time there are limits on the amounts of financial resources and other inputs, and that deciding upon projects' merits as they roll off the drawing boards does not allow the demands on these resources to be related to the available supplies, or to the merits of alternative projects, in any systematic way. One cannot escape the conclusion that a high payoff is to be expected from devoting resources to the active recruitment of skilled project evaluators to build up a sufficiently extensive collection of potential investment projects.

A fourth objection of a technical nature has recently been raised against the use of linear programming models as approximations to nonlinear programming ones. Baumol (1982) has shown that the dual prices resulting from such approximations need bear no relation to the true shadow prices yielded by the nonlinear program. They may take on zero values in cases where the inputs are in fact scarce, and positive values when the inputs are in excess supply. This point will not be pursued further here in view of the operational difficulties of estimating nonlinear functions. It should be noted, however, that a number of applications in the water-resources area have used geometric programming techniques to deal with nonlinearities.

In spite of these and other objections that may be raised against the use of multiobjective programming models for project selection, I believe that they do serve to illuminate the rationale for shadow pricing in economies subject to market distortions, and that a cautious reliance on them can facilitate the actual process of project selection and implement the goals of efficiency and distributional equity which characterize both the UNIDO and the Little-Mirrlees methods of project appraisal.

I wish to express my gratitude to my colleague V. Kerry Smith and to four anonymous referees whose comments have resulted in a substantial reformulation of the content and conclusions of this paper, although they are absolved of any responsibility for the final product.
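The weighting method described above can be sketched in a few lines. Since an LP optimum always occurs at a vertex of the feasible region, the toy below scalarises two objectives over the vertices of an invented polytope and varies the weights to trace out noninferior solutions; it is an illustration of the mechanism, not a general LP solver.

```python
# Vertices of a toy feasible region {x + y <= 4, x <= 3, y <= 3, x, y >= 0}.
vertices = [(0, 0), (3, 0), (3, 1), (1, 3), (0, 3)]

def best_vertex(w1, w2):
    """Maximise the weighted objective w1*f1 + w2*f2 (with f1 = x, f2 = y)
    over the vertex set; each positive weight pair picks out a
    noninferior point in objective space."""
    return max(vertices, key=lambda v: w1 * v[0] + w2 * v[1])

favour_f1 = best_vertex(2, 1)   # weights tilted toward objective 1
favour_f2 = best_vertex(1, 2)   # weights tilted toward objective 2
```

Sweeping the weights traces the nondominated frontier, here (3, 1) and (1, 3), and the weights that produce the policymaker's chosen point serve as the implied tradeoffs between objectives, exactly as in the passage above.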

7.
8.
We introduce the Speculative Influence Network (SIN) to decipher the causal relationships between sectors (and/or firms) during financial bubbles. The SIN is constructed in two steps. First, we develop a Hidden Markov Model (HMM) of regime-switching between a normal market phase represented by a geometric Brownian motion and a bubble regime represented by the stochastic super-exponential Sornette and Andersen (Int J Mod Phys C 13(2):171–188, 2002) bubble model. The calibration of the HMM provides the probability at each time for a given security to be in the bubble regime. Conditional on two assets being qualified in the bubble regime, we then use the transfer entropy to quantify the influence of the returns of one asset i onto another asset j, from which we introduce the adjacency matrix of the SIN among securities. We apply our technology to the Chinese stock market during the period 2005–2008, during which a normal phase was followed by a spectacular bubble ending in a massive correction. We introduce the Net Speculative Influence Intensity variable as the difference between the transfer entropies from i to j and from j to i, which is used in a series of rank ordered regressions to predict the maximum loss (%MaxLoss) endured during the crash. The sectors that influenced other sectors the most are found to have the largest losses. There is some predictability obtained by using the transfer entropy involving industrial sectors to explain the %MaxLoss of financial institutions but not vice versa. We also show that the bubble state variable calibrated on the Chinese market data corresponds well to the regimes when the market exhibits a strong price acceleration followed by clear change of price regimes. Our results suggest that SIN may contribute significant skill to the development of general linkage-based systemic risks measures and early warning metrics.
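Transfer entropy, the ingredient behind the SIN adjacency matrix, can be estimated for discretised series (e.g. signs of returns) with a plug-in frequency estimator. The sketch below is a generic lag-1 estimator on fabricated binary "sectors", not the paper's calibration; in the toy, series y copies x with a one-period delay, so information flows from x to y.

```python
from collections import Counter
from math import log2

def transfer_entropy(src, dst):
    """Plug-in transfer entropy TE(src -> dst) at lag 1 for discrete series:
    TE = sum p(y', y, x) * log2[ p(y'|y, x) / p(y'|y) ],
    with y' = dst[t+1], y = dst[t], x = src[t]."""
    triples, pairs, cond, marg = Counter(), Counter(), Counter(), Counter()
    n = len(dst) - 1
    for t in range(n):
        triples[(dst[t + 1], dst[t], src[t])] += 1
        pairs[(dst[t + 1], dst[t])] += 1
        cond[(dst[t], src[t])] += 1
        marg[dst[t]] += 1
    te = 0.0
    for (y_next, y, x), c in triples.items():
        p_joint = c / n
        p_both = c / cond[(y, x)]        # p(y'| y, x)
        p_dst_only = pairs[(y_next, y)] / marg[y]   # p(y'| y)
        te += p_joint * log2(p_both / p_dst_only)
    return te

x = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0]   # "driving" sector (made up)
y = [0] + x[:-1]                            # copies x with one-period lag

te_xy = transfer_entropy(x, y)
te_yx = transfer_entropy(y, x)
```

The asymmetry te_xy − te_yx is the toy analogue of the paper's Net Speculative Influence Intensity: positive when x drives y more than the reverse.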

9.
Environmental regulation and productivity: testing the Porter hypothesis
Abstract  This paper provides an empirical analysis of the relationship between the stringency of environmental regulation and total factor productivity (TFP) growth in the Quebec manufacturing sector. This allows us to investigate the Porter hypothesis more fully in three directions. First, the dynamic aspect of the hypothesis is captured through the use of lagged regulatory variables. Second, we argue that the hypothesis is more relevant for more polluting sectors. Third, we argue that the hypothesis is more relevant for sectors which are more exposed to international competition. Our empirical results suggest that: (1) the contemporaneous impact of environmental regulation on productivity is negative; (2) the opposite result is observed with lagged regulatory variables, which is consistent with Michael Porter’s conjecture; and (3) this effect is stronger in a subgroup of industries which are more exposed to international competition.
Paul Lanoie
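The sign pattern in findings (1) and (2), negative contemporaneous effect but positive lagged effect, is the kind of result a regression with current and lagged regulatory variables can separate. The sketch below uses entirely synthetic data with an assumed data-generating process, purely to illustrate the specification; it is not the paper's dataset or estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
reg = rng.standard_normal(T)   # stringency of regulation (synthetic)

# Assumed illustrative DGP: regulation hurts TFP growth on impact but
# helps with a one-period lag (the Porter-style dynamic effect).
tfp_growth = -0.5 * reg[1:] + 0.8 * reg[:-1] + 0.05 * rng.standard_normal(T - 1)

# OLS with intercept, current regulation, and lagged regulation.
X = np.column_stack([np.ones(T - 1), reg[1:], reg[:-1]])
coef, *_ = np.linalg.lstsq(X, tfp_growth, rcond=None)
b_now, b_lag = coef[1], coef[2]
```

Because the two regressors enter separately, the estimates recover a negative contemporaneous coefficient alongside a positive lagged one, the qualitative pattern the abstract reports.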

10.
11.
12.
The assumption of full proportionality is incorporated in the constant returns-to-scale (CRS) technology and allows for proportional scaling of the inputs and outputs of production units. The assumption of selective proportionality was recently incorporated in the hybrid returns-to-scale (HRS) technology, in which only a subset of outputs is proportional to a subset of inputs. In this paper we develop a production technology that exhibits both full and selective proportionality at the same time. Real examples of such a technology are pointed out. Subject to certain conditions, the DEA models based on this technology provide better discrimination than the CRS and HRS models.
Victor V. Podinovski
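For intuition on what CRS proportionality buys in a DEA setting, the degenerate single-input, single-output case is useful: there the CRS radial efficiency of each unit reduces to its output/input ratio relative to the best observed ratio. This is only that special case with invented numbers, not the paper's CRS/HRS linear programs.

```python
def crs_efficiency(bundles):
    """Input-oriented CRS (radial) efficiency for single-input,
    single-output data: theta_i = (y_i / x_i) / max_j (y_j / x_j)."""
    ratios = [y / x for (x, y) in bundles]
    best = max(ratios)
    return [r / best for r in ratios]

# (input, output) observations for three units -- illustrative data.
effs = crs_efficiency([(2.0, 4.0), (3.0, 9.0), (4.0, 10.0)])
```

Under full proportionality every observed bundle can be scaled up or down along its ray, so the unit with the best ratio (the second, here) defines the frontier and scores 1.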

13.
This paper addresses the question of the efficacy of Management Control Systems in organizations. It shows that control systems are based on a combination of three underlying approaches — markets, rules and culture — in order to obtain desired behaviours from organizations’ members. These three approaches are then discussed in terms of Hofstede's work-related values characterization. It is shown that each firm or organization defines its own balance among the three bases of control identified above. This balancing is dynamic, and organizations must continuously adapt their Management Control Systems to changes in the overall culture(s), in technology and in the competitive forces. The general evolution of Management Control Systems is seen to be towards a lessening of the importance of rules-based controls and towards an increased reliance on controls embedded in the organizational culture. We are grateful to Kavasseri V. Ramanathan, Richard B. Peterson, Jeremy Dent, to the participants of the EIASM ‘Accounting and Culture’ Workshop, and to Geert Hofstede for their advice, encouragement and cogent criticism. Financial support of the Accounting Development Fund at the University of Washington is gratefully acknowledged.

14.
This study measures productivity growth on Irish dairy farms over the period 1984–2000. A total factor productivity index is constructed for the dairy system and is decomposed into technical change, efficiency change, and changes in scale efficiency. This is achieved by estimating a stochastic output distance function model of the production technology in use on Irish dairy farms. Overall, productivity on Irish dairy farms grew by 1.2% per annum over the sample period.
Alan Matthews (corresponding author)

15.
16.
Output scaling in a cost function setting
A significant result in production theory is that even under a very weak condition on the technology, the cost function is well defined and linearly homogeneous in factor prices. We could term this homogeneity a scaling law, for an equiproportionate scaling of all factor prices results in an identical scaling of the value of the cost function. The purpose of this paper is to determine sets of conditions under which an analogous result may hold for the output variable in the cost function. Our analysis initially focuses on the cost function of a multioutput technology so that the output variable in the cost function is vector-valued. The results we obtain are that some output-scaling laws exist if and only if the underlying technology is ray-homothetic. In the final section we consider the meaning of the results for the single-output case.The refereeing process of this paper was handled through R. Robert Russell. An earlier version of this paper was presented at the 1989 Conference on Current Issues in Productivity, December 6–8, 1989, Rutgers University, Newark, NJ.
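Both scaling laws can be checked numerically on a concrete functional form. The Cobb–Douglas cost function below is an assumed example (one simple member of the ray-homothetic family, here with constant returns), not a construction from the paper; it satisfies the price-scaling law by construction and, in this special case, an output-scaling law as well.

```python
def cost(w1, w2, y, a=0.3):
    """Cobb-Douglas cost function C(w, y) = y * w1**a * w2**(1-a):
    linearly homogeneous in factor prices (w1, w2) by construction."""
    return y * (w1 ** a) * (w2 ** (1.0 - a))

# Price-scaling law: doubling all factor prices doubles cost.
lhs_w = cost(2 * 3.0, 2 * 5.0, y=4.0)
rhs_w = 2 * cost(3.0, 5.0, y=4.0)

# Output-scaling law: with this constant-returns form, doubling output
# also doubles cost.
lhs_y = cost(3.0, 5.0, y=8.0)
rhs_y = 2 * cost(3.0, 5.0, y=4.0)
```

The paper's point is that while the first identity holds under very weak conditions on the technology, the second holds only for ray-homothetic technologies, of which this example is a deliberately simple instance.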

17.
It is claimed the hierarchical age–period–cohort (HAPC) model solves the age–period–cohort (APC) identification problem. However, this is debatable; simulations show situations where the model produces incorrect results, countered by proponents of the model arguing that those simulations are not relevant to real-life scenarios. This paper moves beyond questioning whether the HAPC model works, to why it produces the results it does. We argue HAPC estimates are the result not of the distinctive substantive APC processes occurring in the dataset, but are primarily an artefact of the data structure—that is, the way the data has been collected. Were the data collected differently, the results produced would be different. This is illustrated both with simulations and real data, the latter by taking a variety of samples from the National Health Interview Survey (NHIS) data used by Reither et al. (Soc Sci Med 69(10):1439–1448, 2009) in their HAPC study of obesity. When a sample based on a small range of cohorts is taken, such that the period range is much greater than the cohort range, the results produced are very different from those produced when cohort groups span a much wider range than periods, as is structurally the case with repeated cross-sectional data. The paper also addresses the latest defence of the HAPC model by its proponents (Reither et al. in Soc Sci Med 145:125–128, 2015a). The results lend further support to the view that the HAPC model is not able to accurately discern APC effects, and should be used with caution when there appear to be period or cohort near-linear trends.
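The underlying APC identification problem is the exact linear identity cohort = period − age, which makes a design matrix containing all three effects rank-deficient. A minimal numeric illustration with made-up ages and periods:

```python
import numpy as np

age = np.array([20, 30, 40, 20, 30, 40], dtype=float)
period = np.array([1990, 1990, 1990, 2000, 2000, 2000], dtype=float)
cohort = period - age   # birth cohort: an exact linear function of the other two

# Design matrix with an intercept and all three effects entered linearly.
X = np.column_stack([np.ones(6), age, period, cohort])
rank = np.linalg.matrix_rank(X)
```

The matrix has four columns but rank three, so the linear age, period, and cohort effects are not separately identified; any "solution" (HAPC included, the paper argues) must come from constraints imposed by the model or, as here, by how the data happen to be structured.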

18.
Drawing upon institutional theory we develop a conceptual model and investigate the determinants of market entry for worker cooperatives, publicly traded and limited-liability companies. Our results show that formal institutional conditions (i.e., mercantile legislation) influence the start-up choice of entrepreneurs regarding the legal form of their new venture. In addition, we take into account the influence of informal institutional conditions (i.e., local corporate culture) on the market entry rate of firms with different legal structures. Findings show that, while market entry is sensitive to the general economic climate, entry rates of firms with a different legal structure respond differently to the same economic conditions.
Ingrid Verheul

19.
In this paper cultural values and regulatory barriers to start-up are presented as characteristics of the business environment which influence international differences in the level of entrepreneurial activity. A first objective of this paper is to measure the importance of a country’s cultural values in determining the national level of entrepreneurial activity, calculated as the Total Entrepreneurial Activity rate from the Global Entrepreneurship Monitor. Culture is studied using Schwartz’s value structure (Schwartz 1994). This allows seven cultural orientations to be differentiated and arranged around three bipolar dimensions: Autonomy-Embeddedness, Egalitarianism-Hierarchy and Harmony-Mastery. The paper also studies the effect of regulatory barriers for business start-ups on Total Entrepreneurial Activity in different countries. Regulatory barriers are measured using data from the “Doing Business” project of the World Bank. The role of cultural values and regulatory barriers in entrepreneurial activity is tested using data from 56 countries and Structural Equation Modeling. The paper shows that cultural values and regulatory barriers are not related to entrepreneurship in the same way in countries with differing levels of development. On the contrary, the strength and nature of the influence of both factors on entrepreneurial activity depends on a country’s per capita GDP. Furthermore, the impact of regulatory barriers on entrepreneurship is moderated by cultural values. Thus, the discouraging effect of regulatory barriers on entrepreneurial activity is more important in those countries with a societal culture characterized by autonomy, egalitarianism and harmony values.

20.
How should an organization be designed in order to provide its members with minimal incentives to defect? And how does the optimal design depend on the type of strategic interaction between defectors and remaining organizational members? This paper addresses such issues in a game theoretic model of cooperation, in which an organization is formally represented by a connected network, and where gains from cooperation are given by a partition function. We show that critical structural features of the organization depend in a clear-cut way on the sign of spillovers. In particular, positive spillovers favor the adoption of dispersed and centralized forms, while negative spillovers favor cohesive and horizontal ones. Moreover, if the organizational form determines all the communication possibilities of members, a highly centralized organization—the star—emerges under positive spillovers, whereas two horizontal architectures—the circle and the complete—emerge under negative spillovers.
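The star and circle architectures contrasted above differ starkly in how centralized they are, which a simple degree-based measure already captures. This sketch only quantifies that structural contrast with a Freeman-style centralisation score on five members; it does not implement the paper's partition-function game.

```python
def degree_sequence(n, edges):
    """Degrees of an undirected graph on n nodes given its edge list."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

# The two architectures highlighted in the abstract, on five members:
star = [(0, i) for i in range(1, 5)]             # one central member
circle = [(i, (i + 1) % 5) for i in range(5)]    # fully horizontal

star_deg = degree_sequence(5, star)
circle_deg = degree_sequence(5, circle)

# Freeman-style centralisation: total shortfall of degrees below the maximum.
star_central = sum(max(star_deg) - d for d in star_deg)
circle_central = sum(max(circle_deg) - d for d in circle_deg)
```

The star attains the maximal score for its size while the circle scores zero, the two structural extremes the paper associates with positive and negative spillovers respectively.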
