Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
In this paper, we examine the implications of imposing separability on the translog and three other flexible forms. Our results imply that the Berndt-Christensen ‘nonlinear’ test for weak separability tests not only for weak separability, but also imposes a restrictive structure on the macro and micro functions for all currently known ‘flexible’ functional forms. For example, testing for weak separability using the translog as an exact form is in fact equivalent to testing for a hybrid of strong (additive) separability and homothetic weak separability with Cobb-Douglas aggregator functions. Our results show that these ‘flexible’ functional forms are ‘separability-inflexible’. That is, they are not capable of providing a second-order approximation to an arbitrary weakly separable function in any neighbourhood of a given point.

2.
Statistical methodology for spatio‐temporal point processes is in its infancy. We consider second‐order analysis based on pair correlation functions and K‐functions for general inhomogeneous spatio‐temporal point processes and for inhomogeneous spatio‐temporal Cox processes. Assuming spatio‐temporal separability of the intensity function, we clarify different meanings of second‐order spatio‐temporal separability. One is second‐order spatio‐temporal independence and relates to log‐Gaussian Cox processes with an additive covariance structure of the underlying spatio‐temporal Gaussian process. Another concerns shot‐noise Cox processes with a separable spatio‐temporal covariance density. We propose diagnostic procedures for checking hypotheses of second‐order spatio‐temporal separability, which we apply on simulated and real data.
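A first-order notion underlying the second-order analyses above is separability of the intensity function itself, λ(s, t) = λ₁(s)λ₂(t). As a minimal sketch (the factor functions below are invented for illustration, not taken from the paper), a separable intensity evaluated on a space–time grid forms a rank-one matrix, which gives a quick numerical check:

```python
import numpy as np

# Hypothetical separable intensity lambda(s, t) = f(s) * g(t);
# both factor functions are assumed forms for illustration only.
s = np.linspace(0.1, 1.0, 50)
t = np.linspace(0.1, 2.0, 60)
f = 2.0 + np.sin(2 * np.pi * s)   # spatial factor (assumed)
g = np.exp(-t)                    # temporal factor (assumed)

# Intensity evaluated on the grid: L[i, j] = f(s_i) * g(t_j)
L = np.outer(f, g)

# A separable intensity surface has numerical rank 1
rank = np.linalg.matrix_rank(L, tol=1e-8)
```

A non-separable surface would produce a rank greater than one, so this rank check is a crude separability diagnostic at the intensity (first-order) level; the paper's diagnostics address the subtler second-order notions.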

3.
Many decision problems involve more than one attribute. Separable multi-attribute utility functions are commonly used to model preferences in such situations. We consider the case in which one attribute can be identified as money. The price at which non-monetary attributes may be substituted by money, the relation of this price to a decision-maker's wealth, and the implications on attitudes toward risk are examined for additively and multiplicatively separable multi-attribute utility functions. In particular, it is shown that additive separability, price independent of wealth and monetary risk-aversion are mutually inconsistent.
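The inconsistency can be illustrated numerically with invented functional forms (these are not the paper's): take an additively separable utility u(x, m) = v(x) + w(m) with a monetary-risk-averse w(m) = √m. The price p at which an improvement in the non-monetary attribute is substituted by money solves v(x₁) + w(m − p) = v(x₀) + w(m), so p = m − w⁻¹(w(m) − Δv):

```python
import math

# Additively separable utility with risk-averse money utility w(m) = sqrt(m).
# delta_v = v(x1) - v(x0) is the utility gain from the non-monetary attribute.
def willingness_to_pay(wealth, delta_v):
    # Solve v(x1) + w(wealth - p) = v(x0) + w(wealth) for p, with w = sqrt
    return wealth - (math.sqrt(wealth) - delta_v) ** 2

p_poor = willingness_to_pay(100.0, 1.0)
p_rich = willingness_to_pay(400.0, 1.0)
```

With these assumed forms the substitution price rises with wealth (19 at wealth 100 versus 39 at wealth 400), so it cannot be wealth-independent while additive separability and monetary risk aversion both hold, which is the flavor of the mutual inconsistency the abstract reports.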

4.
This paper surveys the conditions under which it is possible to represent a continuous preference ordering using utility functions. We start with a historical perspective on the notions of utility and preferences, continue by defining the mathematical concepts employed in this literature, and then list several key contributions to the topic of representability. These contributions concern both the preference orderings and the spaces where they are defined. For any continuous preference ordering, we show the need for separability and the sufficiency of connectedness and separability, or second countability, of the space where it is defined. We emphasize the need for separability by showing that in any nonseparable metric space, there are continuous preference orderings without utility representation. However, by reinforcing connectedness, we show that countable boundedness of the preference ordering is a necessary and sufficient condition for the existence of a (continuous) utility representation. Finally, we discuss the special case of strictly monotonic preferences.

5.
ABSTRACT

This study develops a multi-regional input–output (MRIO) laboratory for policy-relevant applications in China. The Chinese IELab features unique flexibility and advances the previous state of the art in three novel respects. First, it can generate MRIO tables at very fine regional and sectoral detail, tailored to users’ own research questions. Second, it covers the entire territorial economic boundary and has the longest and most up-to-date annual time series, from 1978 to 2015. Third, it can be used to provide insight into a wide range of research and policy questions, including social, economic, and environmental issues, thus significantly improving all applications that rely on input–output tables. These features are illustrated by generating a Beijing–Tianjin–Hebei multi-regional supply and use table for 2014 at city level and applying it to the case study of transferring Beijing’s non-capital functions according to The Beijing-Tianjin-Hebei Coordinated Development Strategy set by the Chinese government.

6.
In this paper we examine the four properties of technical change functions introduced by Sato [1980, 1981], paying particular attention to the composition property. We show by way of examples that two of the properties are independent, but that a third is a necessary consequence of two others. A theorem providing a necessary condition for the composition property is stated and proved. Finally, we identify the general continuous functional form of technical change functions for which the efficiency of factor i is independent of the quantities of all factors j ≠ i. Noting an unappealing characteristic of a member of this class of transformations, we propose an additional property for technical change functions and note that it is independent of the others. The refereeing process of this paper was handled through S.T. Hackman.

7.
We derive a primal Divisia technical change index based on the output distance function and further show the validity of this index from both economic and axiomatic points of view. In particular, we derive the primal Divisia technical change index by total differentiation of the output distance function with respect to a time trend. We then show that this index is dual to the Jorgenson and Griliches (1967) dual Divisia total factor productivity growth (TFPG) index when both the output and input markets are competitive; dual to the Diewert and Fox (2008) markup-adjusted revenue-share-based dual Divisia technical change index when market power is limited to output markets; dual to the Denny et al. (1981) and Fuss (1994) cost-elasticity-share-based dual Divisia TFPG index when market power is limited to output markets and constant returns to scale is present; and also dual to a markup-and-markdown-adjusted Divisia technical change index when market power is present in both output and input markets. Finally, we show that the primal Divisia technical change index satisfies the properties of identity, commensurability, monotonicity, and time reversal. It also satisfies the property of proportionality in the presence of path independence, which in turn requires separability between inputs and outputs and homogeneity of subaggregator functions.

8.
The concepts of isotropy/anisotropy and separability/non‐separability of a covariance function are strictly related. If a covariance function is separable, it cannot be isotropic or geometrically anisotropic, except for the Gaussian covariance function, which is the only model both separable and isotropic. In this paper, some interesting results concerning the Gaussian covariance model and its properties related to isotropy and separability are given, and moreover, some examples are provided. Finally, a discussion on asymmetric models, with Gaussian marginals, is furnished and the strictly positive definiteness condition is discussed.
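The special status of the Gaussian model follows from the exponential's product rule: exp(−(h_s² + h_t²)/a) depends only on the combined squared lag (isotropy) yet also factors into a spatial and a temporal term (separability). A quick numeric check of this identity, with an arbitrary range parameter chosen for illustration:

```python
import math

a = 2.0  # range parameter (arbitrary choice for illustration)

def gauss_cov(hs, ht):
    # Gaussian covariance: isotropic, since it depends only on hs^2 + ht^2
    return math.exp(-(hs**2 + ht**2) / a)

hs, ht = 0.7, 1.3
# Separable factorization: exp(-hs^2/a) * exp(-ht^2/a)
product = math.exp(-hs**2 / a) * math.exp(-ht**2 / a)
```

The two expressions agree to machine precision at any lag pair, which is exactly the coincidence of separability and isotropy that the abstract attributes uniquely to the Gaussian model.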

9.
Distance functions are gaining relevance as alternative representations of production technologies, with growing numbers of empirical applications being made in the productivity and efficiency field. Distance functions were initially defined on the input or output production possibility sets by Shephard (1953, 1970) and extended to a graph representation of the technology by Färe, Grosskopf and Lovell (1985) through their graph hyperbolic distance function. Since then, different techniques such as non-parametric DEA and parametric SFA have been used to calculate these distance functions. However, in the latter case we know of no study in which the restriction to input or output orientation has been relaxed. We propose to overcome this restriction by defining and estimating a parametric hyperbolic distance function which simultaneously allows for the maximum equiproportionate expansion of outputs and reduction of inputs. In particular, we introduce a translog hyperbolic specification that complies with the conventional properties that the hyperbolic distance function satisfies. Finally, to illustrate its applicability in efficiency analysis we implement it using a data set of Spanish savings banks.

10.
In aggregation for data envelopment analysis (DEA), a jointly determined aggregate measure of output and input efficiency is desired that is consistent with the individual decision making unit measures. An impasse has been reached in the current state of the literature, however, where only separate measures of input and output efficiency have resulted from attempts to aggregate technical efficiency with the radial measure models commonly employed in DEA. The latter measures are “incomplete” in that they omit the non-zero input and output slacks, and thus fail to account for all inefficiencies that the model can identify. The Russell measure eliminates the latter deficiency but is difficult to solve in standard formulations. A new approach has become available, however, which utilizes a ratio measure in place of the standard formulations. Referred to as an enhanced Russell graph measure (ERM), the resulting model is in the form of a fractional program. Hence, it can be transformed into an ordinary linear programming structure that can generate an optimal solution for the corresponding ERM model. As shown in this paper, an aggregate ERM can then be formed with all the properties considered to be desirable in an aggregate measure—including jointly determined input and output efficiency measures that represent separate estimates of input and output efficiency. Much of this paper is concerned with technical efficiency in both individual and system-wide efficiency measures. Weighting systems are introduced that extend to efficiency-based measures of cost, revenue, and profit, as well as derivatives such as rates of return over cost. The penultimate section shows how the solution to one model also generates optimal solutions to models with other objectives that include rates of return over cost and total profit. This is accomplished in the form of efficiency-adjusted versions of these commonly used measures of performance.

11.
Data envelopment analysis (DEA) measures the efficiency of each decision making unit (DMU) by maximizing the ratio of virtual output to virtual input with the constraint that the ratio does not exceed one for each DMU. In the case that one output variable has a linear dependence (conic dependence, to be precise) with the other output variables, it can be hypothesized that the addition or deletion of such an output variable would not change the efficiency estimates. This is also the case for input variables. However, in the case that a certain set of input and output variables is linearly dependent, the effect of such a dependency on DEA is not clear. In this paper, we call such a dependency a cross redundancy and examine the effect of a cross redundancy on DEA. We prove that the addition or deletion of a cross-redundant variable does not affect the efficiency estimates yielded by the CCR or BCC models. Furthermore, we present a sensitivity analysis to examine the effect of an imperfect cross redundancy on DEA by using accounting data obtained from United States exchange-listed companies.
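The ratio-maximization idea behind the CCR model can be sketched for the special case of one input and one output, where the data below are invented for illustration: each DMU's efficiency reduces to its output/input ratio divided by the best ratio in the sample, with the best unit scoring exactly one.

```python
# Hypothetical DMUs as name -> (input, output); single input/output case
dmus = {"A": (2.0, 1.0), "B": (4.0, 2.0), "C": (8.0, 2.0)}

# CCR efficiency collapses to ratio / best ratio in this special case
best_ratio = max(y / x for x, y in dmus.values())
efficiency = {name: (y / x) / best_ratio for name, (x, y) in dmus.items()}
```

Units A and B both attain the best output/input ratio of 0.5 and are rated fully efficient, while C, producing the same output as B from twice the input, scores 0.5. The general multi-input, multi-output CCR model solves a linear program per DMU rather than this one-line ratio.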

12.
CUMIN charts     
Classical control charts are very sensitive to deviations from normality. In this respect, nonparametric charts form an attractive alternative. However, these often require considerably more Phase I observations than are available in practice. This latter problem can be solved by introducing grouping during Phase II. Then each group minimum is compared to a suitable upper limit (in the two-sided case also each group maximum to a lower limit). In the present paper it is demonstrated that such MIN charts allow further improvement by adopting a sequential approach. Once a new observation fails to exceed the upper limit, its group is aborted and a new one starts right away. The resulting CUMIN chart is easy to understand and implement. Moreover, this chart is truly nonparametric and has good detection properties. For example, like the CUSUM chart, it is markedly better than a Shewhart X-chart, unless the shift is really large.
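The sequential rule described above amounts to a runs rule: a signal occurs once a full group of consecutive observations all exceed the upper limit, and any observation at or below the limit aborts the current group and starts a new one. A minimal sketch of this signaling logic (the function name and parameters are our own, not from the paper):

```python
def cumin_signal(observations, upper_limit, group_size):
    """Return the 1-based index at which the CUMIN chart signals, or None.

    Signals when `group_size` consecutive observations all exceed
    `upper_limit`; a non-exceedance aborts the current group and a
    new group starts right away (the sequential CUMIN idea).
    """
    run = 0
    for i, x in enumerate(observations, start=1):
        if x > upper_limit:
            run += 1
            if run == group_size:
                return i
        else:
            run = 0  # group aborted, start afresh
    return None
```

For example, with limit 4 and group size 3, the stream [1, 5, 5, 2, 5, 5, 5] signals at observation 7: the run of two exceedances is aborted by the 2, and the chart signals only once three exceedances occur in a row.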

13.
We consider generalized production functions, introduced in Zellner and Revankar (1969), for output y=g(f) where g is a monotonic function and f is a homogeneous production function. For various choices of the scale elasticity or returns to scale as a function of output, differential equations are solved to determine the associated forms of the monotonic transformation, g(f). Then by choice of the form of f, the elasticity of substitution, constant or variable, is determined. In this way, we have produced and generalized a number of homothetic production functions, some already in the literature. Also, we have derived and studied their associated cost functions to determine how their shapes are affected by various choices of the scale elasticity and substitution elasticity functions. In general, we require that the returns to scale function be a monotonically decreasing function of output and that associated average cost functions be U- or L-shaped with a unique minimum. We also represent production functions in polar coordinates and show how this representation simplifies study of production functions' properties. Using data for the US transportation equipment industry, maximum likelihood and Bayesian methods are employed to estimate many different generalized production functions and their associated average cost functions. In accord with results in the literature, it is found that the scale elasticities decline with output and that average cost curves are U- or L-shaped with unique minima. © 1998 John Wiley & Sons, Ltd.
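The y = g(f) construction can be illustrated numerically with assumed forms (these are chosen for illustration, not taken from the paper): a Cobb–Douglas f with constant returns combined with a concave monotonic g yields a generalized production function whose scale elasticity, d ln y / d ln s along a ray, declines with output.

```python
import math

# Assumed forms: g(f) = log(1 + f), f(K, L) = K**0.4 * L**0.6 (degree-1 homogeneous)
def output(K, L, scale=1.0):
    f = (scale * K) ** 0.4 * (scale * L) ** 0.6
    return math.log(1.0 + f)

# Scale elasticity d ln y / d ln s at s = 1, via central difference
def scale_elasticity(K, L, h=1e-6):
    up = math.log(output(K, L, 1.0 + h))
    dn = math.log(output(K, L, 1.0 - h))
    return (up - dn) / (2 * h)
```

Evaluated at (K, L) = (1, 1) and (10, 10), the elasticity falls from roughly 0.72 to roughly 0.38, a returns-to-scale function that decreases monotonically with output, as the abstract requires.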

14.
Summary: We consider in this paper the transient behaviour of the queuing system in which (i) the input, following a Poisson distribution, is in batches of variable numbers; (ii) queue discipline is ‘first come first served’, it being assumed that the batches are pre-ordered for service purposes; and (iii) service time distribution is hyper-exponential with n branches. The Laplace transform of the system size distribution is determined by applying the method of generating functions, introduced in queuing theory by Bailey [1]. However, assuming steady state conditions to obtain, the problem is completely solved and it is shown that by suitably defining the traffic intensity factor, ϱ, the value, p0, of the probability of no delay, remains the same in this case of batch arrivals also as in the case of single arrivals. The Laplace transform of the waiting time distribution is also calculated in steady state case from which the mean waiting time may be calculated. Some of the known results are derived as particular cases.
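The "suitably defined" traffic intensity for batch arrivals scales the batch arrival rate by the mean batch size and the mean service time, after which p0 = 1 − ϱ as for single arrivals. A small arithmetic sketch with invented parameters (batch distribution, branch probabilities, and branch rates are all assumptions for illustration):

```python
# Assumed parameters for illustration
lam = 2.0                              # batch arrival rate
batch_pmf = {1: 0.5, 2: 0.3, 3: 0.2}   # batch-size distribution
branch_p = [0.4, 0.6]                  # hyper-exponential branch probabilities
branch_mu = [10.0, 5.0]                # branch service rates

# Mean batch size and mean (hyper-exponential) service time
mean_batch = sum(k * p for k, p in batch_pmf.items())
mean_service = sum(p / mu for p, mu in zip(branch_p, branch_mu))

# Traffic intensity suitably defined for batch arrivals
rho = lam * mean_batch * mean_service
p0 = 1 - rho   # probability of no delay, as in the single-arrival case
```

Here ϱ = 2.0 × 1.7 × 0.16 = 0.544, so the system is stable and p0 = 0.456; with single arrivals of the same aggregate customer rate the same p0 results, which is the invariance the abstract reports.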

15.
A new framework for testing for the existence of a consistent aggregator for a subset of inputs in a production function is developed in terms of the variable profit function. In contrast to the Berndt–Christensen framework, in which the parametric restrictions required to attain weak separability also impose unwanted restrictions on the form of the aggregator, the aggregator function has a flexible functional form. Consequently this procedure should permit a less restrictive test of separability or aggregation. Application of the procedure to the data for U.S. manufacturing assuming a production function involving two labour inputs (blue- and white-collar workers) and two capital inputs (structures and equipment) leads to the conclusion that there does not exist a consistent aggregator for labour whereas there is some mild support for the existence of a consistent aggregator for capital.

16.
An effectivity function assigns to each coalition of individuals in a society a family of subsets of alternatives such that the coalition can force the outcome of society’s choice to be a member of each of the subsets separately. A representation of an effectivity function is a game form with the same power structure as that specified by the effectivity function. In the present paper we investigate the continuity properties of the outcome functions of such representations. It is shown that while it is not in general possible to find continuous representations, there are important subfamilies of effectivity functions for which continuous representations exist. Moreover, it is found that in the study of continuous representations one may practically restrict attention to effectivity functions on the Cantor set. Here it is found that general effectivity functions have representations with lower or upper semicontinuous outcome functions.

17.
This paper presents and estimates an input–output model in which input coefficient changes are functions of changing prices. The model produces results that mirror the characteristics of input demand functions based on the model of cost minimization subject to producing a desired level of output. It does not rely on the specification of a functional form for input coefficients, and it does not require the use of assumptions regarding the elasticity of substitution. Instead, it allows the actual price and coefficient changes that occur between periods to identify the implicit elasticities and own- and cross-price derivatives. Using this model, it is shown how accurate measures of price effects, including the full array of own and cross-elasticities of demand, can be estimated for models comprising up to 15 sectors given data for only two time periods.
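The fixed-coefficient baseline that this price-responsive model generalizes is the standard Leontief quantity relation x = Ax + f, solved as x = (I − A)⁻¹f. A minimal sketch with an invented three-sector coefficient matrix and final-demand vector (the numbers are illustrative only):

```python
import numpy as np

# Hypothetical 3-sector technical-coefficient matrix A and final demand f
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.15]])
f = np.array([100.0, 50.0, 75.0])

# Gross outputs solve x = A x + f, i.e. x = (I - A)^(-1) f
x = np.linalg.solve(np.eye(3) - A, f)
```

In the paper's model the entries of A themselves respond to relative price changes between the two observed periods, which is what identifies the implicit own- and cross-price elasticities; the fixed-A solve above is only the point of departure.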

18.
In this paper, we draw on both the consistent specification testing and the predictive ability testing literatures and propose an integrated conditional moment type predictive accuracy test that is similar in spirit to that developed by Bierens (J. Econometr. 20 (1982) 105; Econometrica 58 (1990) 1443) and Bierens and Ploberger (Econometrica 65 (1997) 1129). The test is consistent against generic nonlinear alternatives, and is designed for comparing nested models. One important feature of our approach is that the same loss function is used for in-sample estimation and out-of-sample prediction. In this way, we rule out the possibility that the null model can outperform the nesting generic alternative model. It turns out that the limiting distribution of the ICM type test statistic that we propose is a functional of a Gaussian process with a covariance kernel that reflects both the time series structure of the data as well as the contribution of parameter estimation error. As a consequence, critical values are data dependent and cannot be directly tabulated. One approach in this case is to obtain critical value upper bounds using the approach of Bierens and Ploberger (Econometrica 65 (1997) 1129). Here, we establish the validity of a conditional p-value method for constructing critical values. The method is similar in spirit to that proposed by Hansen (Econometrica 64 (1996) 413) and Inoue (Econometric Theory 17 (2001) 156), although we additionally account for parameter estimation error. In a series of Monte Carlo experiments, the finite sample properties of three variants of the predictive accuracy test are examined. Our findings suggest that all three variants of the test have good finite sample properties when quadratic loss is specified, even for samples as small as 600 observations. However, non-quadratic loss functions such as linex loss require larger sample sizes (of 1000 observations or more) in order to ensure reasonable finite sample performance.

19.
We characterize a rule in minimum cost spanning tree problems using an additivity property and some basic properties. If the set of possible agents has at least three agents, these basic properties are symmetry and separability. If the set of possible agents has two agents, we must add positivity.
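The cost-sharing rule itself is characterized axiomatically and is not reproduced here; as a hedged sketch of the underlying problem, the total cost to be shared is that of a minimum cost spanning tree connecting all agents to the source, computable with plain Kruskal (the graph below is invented for illustration):

```python
# Kruskal's algorithm with union-find; node 0 is the source, 1..n are agents
def mst_cost(n_nodes, edges):
    """edges: list of (cost, u, v); returns the total cost of an MST."""
    parent = list(range(n_nodes))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    total = 0.0
    for cost, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:          # edge joins two components: keep it
            parent[ru] = rv
            total += cost
    return total

# Hypothetical costs: source 0 and agents 1..3
edges = [(4, 0, 1), (6, 0, 2), (10, 0, 3), (2, 1, 2), (3, 2, 3)]
```

Here the optimal tree uses edges (1,2), (2,3), and (0,1) for a total cost of 9; axioms such as separability then constrain how that 9 may be divided among the three agents.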

20.
In the early 1980s Kopp and Diewert proposed a popular method to decompose cost efficiency into allocative and technical efficiency for parametric functional forms based on the radial approach initiated by Farrell. We show that, relying on recently proposed homogeneity and duality results, their approach is unnecessary for self-dual homothetic production functions, while it is inconsistent in the non-homothetic case. For homothetic technologies the radial distance function can be correctly interpreted as a technical efficiency measure, since allocative efficiency is independent of the output level and radial input reductions leave it unchanged. For non-homothetic technologies, we contend, this is not the case, because optimal input demands depend on the output targeted by the firm, as does the inequality between marginal rates of substitution and market prices—allocative inefficiency. We demonstrate that a correct definition of technical efficiency corresponds to the directional distance function, because its flexibility ensures that allocative efficiency is kept unchanged through movements in the input production possibility set when solving technical inefficiency, and therefore the associated cost reductions can be solely—and rightly—ascribed to technical, engineering improvements. The new methodology, allowing for a consistent decomposition of cost inefficiency, is illustrated with simple examples of non-homothetic production functions.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号