Similar Documents (20 results)
1.
DEA (Data Envelopment Analysis) attempts to identify sources and estimate amounts of inefficiencies contained in the outputs and inputs generated by managed entities called DMUs (Decision Making Units). Explicit formulation of underlying functional relations with specified parametric forms relating inputs to outputs is not required. An overall (scalar) measure of efficiency is obtained for each DMU from the observed magnitudes of its multiple inputs and outputs without requiring a priori weights or relative value assumptions; in addition, sources and amounts of inefficiency are estimated for each input and each output for every DMU. Earlier theory is extended so that DEA can deal with zero inputs and outputs and zero virtual multipliers (shadow prices). This is accomplished by partitioning DMUs into six classes via primal and dual representation theorems, by means of which restrictions to positive observed values for all inputs and outputs are eliminated, along with the positivity conditions imposed on the variables that are usually enforced by recourse to non-Archimedean concepts. Three of the six classes are scale inefficient, and two of the three scale-efficient classes are also technically (zero-waste) efficient. The refereeing process of this paper was handled through R. Banker. This paper was prepared as part of the research supported by National Science Foundation grant SES-8722504 and by the IC2 Institute of The University of Texas and was initially submitted in May 1985.
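For reference, the standard CCR primal-dual pair that such representation theorems build on can be written as follows; this is the textbook statement in common notation, not the paper's extended formulation:

\begin{align*}
\text{(envelopment)}\quad &\min_{\theta,\;\lambda\ge 0}\ \theta
  \quad\text{s.t.}\quad \sum_{j}\lambda_j x_{ij}\le\theta x_{io}\ \ \forall i,\qquad
  \sum_{j}\lambda_j y_{rj}\ge y_{ro}\ \ \forall r,\\
\text{(multiplier)}\quad &\max_{u,\;v\ge 0}\ \sum_{r}u_r y_{ro}
  \quad\text{s.t.}\quad \sum_{i}v_i x_{io}=1,\qquad
  \sum_{r}u_r y_{rj}-\sum_{i}v_i x_{ij}\le 0\ \ \forall j.
\end{align*}

A zero virtual multiplier is a u_r or v_i that vanishes at the optimum; the six-class partition described above characterizes when such zeros, and zero observed inputs or outputs, can be handled without non-Archimedean infinitesimals.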

2.
This paper develops a new methodology to measure and decompose global DMU efficiency into efficiencies of inputs (or outputs). The basic idea rests on the fact that a global DMU efficiency score can be misleading when managers proceed to reallocate their inputs or redefine their outputs. The literature provides a basic measure for the global DMU efficiency score. A revised model is developed for measuring the efficiencies of global DMUs and their input (or output) efficiency components, based on a hypothesis of virtual DMUs. The paper suggests a method for measuring global DMU efficiency simultaneously with the efficiencies of its input components, which we call the input decomposition DEA model (ID-DEA), and of its output components, which we call the output decomposition DEA model (OD-DEA). These twin models differ from the super-efficiency model (SE-DEA) and the common set of weights model (CSW-DEA). The twin models (ID-DEA, OD-DEA) were applied to agricultural farms; the results gave different efficiency scores for inputs (or outputs), while the global DMU efficiency score was given by the Charnes, Cooper and Rhodes (Charnes et al., 1978) [1] CCR78 model. The rationale for the new hypothesis and model is that managers do not have the same level of information about all inputs and outputs, which constrains them to manage resources by the (global) efficiency scores; each input/output thus has a different reality depending on the manager's decision and the information available at the time of the decision. The paper decomposes global DMU efficiency into input (or output) component efficiencies, so that each component has its own score instead of a single global DMU score. These findings should improve management decision making about reallocating inputs and redefining outputs. As for policy implications, the twin models help policy makers assess, improve and reorient their strategies and execute programs that promote best practices and minimise losses.
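Since the abstract benchmarks against the CCR78 model, a minimal sketch of that model may help fix ideas. The sketch below solves the input-oriented CCR envelopment program with SciPy; the function name ccr_efficiency and the farm data are illustrative placeholders, not the paper's code:

import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    # Input-oriented CCR efficiency of DMU k.
    # X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs).
    # Solves: min theta  s.t.  sum_j lam_j x_ij <= theta * x_ik,
    #                          sum_j lam_j y_rj >= y_rk,  lam >= 0.
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)          # decision vector: [theta, lam_1..lam_n]
    c[0] = 1.0                   # minimize theta
    A_in = np.hstack([-X[k].reshape(-1, 1), X.T])   # inputs: lam'x - theta*x_k <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])     # outputs: -lam'y <= -y_k
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[k]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun               # theta* in (0, 1]; 1 = technically efficient

# Hypothetical farms: 2 inputs (land, labour), 1 unit of output each
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
print([round(ccr_efficiency(X, Y, k), 3) for k in range(len(X))])

A score of 1 flags a technically efficient unit; the ID-DEA/OD-DEA decomposition described above would go further and attach a separate score to each input or output component.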

3.
Efficiency evaluation of a Decision Making Unit (DMU) involves two issues: 1) selection of an appropriate reference plan against which to evaluate the DMU, and 2) measurement of performance slack. In the literature, these issues are mixed in one and the same operation, but we argue that separating them has theoretical as well as practical advantages. We provide an axiomatic characterization of the implicit Farrell selection. This approach ignores important aspects of the technology by focusing on proportional variations in inputs (or outputs). We propose a new approach where potential improvements are used to guide the selection of reference plans. A characterization of this approach is provided, and an associated translation-invariant, strictly monotonic and continuous efficiency index is suggested.

4.
Data envelopment analysis (DEA) assumes that inputs and outputs are measured on scales in which larger numerical values correspond to greater consumption of inputs and greater production of outputs. We present a class of DEA problems in which one or more of the inputs or outputs are naturally measured on scales in which higher numerical values represent lower input consumption or lower output production. We refer to such quantities as reverse inputs and reverse outputs. We propose to incorporate reverse inputs and outputs into a DEA model by returning to the basic principles that lead to the DEA model formulation. We compare our method to reverse scoring, the most commonly used approach, and demonstrate the relative advantages of our proposed technique. We use this concept to analyze all 30 Major League Baseball (MLB) organizations during the 1999 regular season to determine their on-field and front-office relative efficiencies. Our on-field DEA model employs one output and two symmetrically defined inputs, one measuring offense and one measuring defense. The defensive measure is such that larger values correspond to worse, rather than better, defensive performance, and hence it is a reverse input. The front-office model uses one input; its outputs, one of which is a reverse output, are the inputs to the on-field model. We discuss the organizational implications of our results.
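For comparison, reverse scoring, the common baseline the authors contrast their method with, simply flips a reverse-scaled measure by subtracting it from a constant before running standard DEA. A minimal sketch with hypothetical run totals; the choice of the constant K is itself a modelling decision:

import numpy as np

# Reverse input: runs allowed on defense; larger = worse performance.
runs_allowed = np.array([640.0, 711.0, 856.0, 905.0])

K = runs_allowed.max()               # a common but arbitrary choice of constant
defense_flipped = K - runs_allowed   # now larger = better
print(defense_flipped)               # [265. 194.  49.   0.]

The abstract's point is that this transformation is not innocuous, which motivates rebuilding the DEA formulation from first principles instead.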

5.
This paper treats efficiency measurement when some outputs are undesirable and producers control pollutants by end-of-pipe or change-in-process abatement. A data envelopment analysis framework that compares producers with similar pollution control efforts is proposed. First, this approach avoids arbitrary disposability assumptions for undesirable outputs. Second, the model is used to evaluate the interplay between pollution control activities and technical efficiency. I compare my approach to the traditional neo-classical production model, which does not include undesirable outputs among the outputs, and to Färe et al.'s (Rev Econ Stat 71:90–98, 1989; J Econom 126:469–492, 2005) well-known model that incorporates bads. I evaluate the common assumption in the literature on polluting technologies that inputs are allocatable to pollution control, and apply U.S. electricity data to illustrate the main point: although my empirical model specifications are in line with the literature on polluting technologies, they rely on inputs that play an insignificant role in controlling nitrogen oxide (NOx) emissions. Consequently, there is no reason to expect the efficiency scores of the traditional model to differ from those of the other two models that account for resources employed in pollution control. Statistical tests show that my model, which explicitly takes pollution control efforts into account, produces efficiency scores that are not statistically different from the traditional model's scores for all model specifications, while Färe et al.'s model produces significantly different results for some model specifications. I conclude that the popular production models that incorporate undesirable outputs may not be applicable to all cases involving polluting production and that more emphasis on appropriate empirical specification is needed.

6.
Dealing with weighted additive models in Data Envelopment Analysis guarantees, among other interesting properties, that any projection of an inefficient unit belongs to the strongly efficient frontier. Recently, constant returns to scale (CRS) range-bounded models have been introduced for defining a new additive-type efficiency measure (see Cooper et al. in J Prod Anal 35(2):85–94, 2011). This paper extends that earlier work to a more general setting. In particular, we show that under free disposability of inputs and outputs, CRS bounded additive models require a double set of slacks; the second set of slacks allows us to properly characterize all the Pareto-efficient points associated with the bounded technology. We further introduce CRS partially-bounded additive models.
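For context, the weighted additive model referred to above has the following textbook form for a unit (x_o, y_o), with s^- and s^+ the input and output slacks and w^-, w^+ their weights (the paper's CRS bounded variant adds range bounds and, as the abstract notes, a second set of slacks):

\max_{\lambda,\,s^-,\,s^+\,\ge\,0}\ \ \sum_i w_i^- s_i^- + \sum_r w_r^+ s_r^+
\quad\text{s.t.}\quad X\lambda + s^- = x_o,\qquad Y\lambda - s^+ = y_o

(with the convexity constraint \sum_j \lambda_j = 1 added under variable returns to scale). Any optimal projection (x_o - s^{-*},\, y_o + s^{+*}) lies on the strongly efficient frontier.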

7.
The purpose of this paper is to discuss the use of Value Efficiency Analysis (VEA) in efficiency evaluation when preference information is taken into account. Value efficiency analysis is an approach that applies the ideas developed for Multiple Objective Linear Programming (MOLP) to Data Envelopment Analysis (DEA). Preference information is given through a desirable structure of input and output values. The same values can be used for all units under evaluation, or the values can be specific to each unit. A decision-maker can specify the input and output values subjectively without any support, or (s)he can use a multiple-criteria support system to assist in finding those values on the efficient frontier. The underlying assumption is that the most preferred values maximize the decision-maker's implicitly known value function over a production possibility set or a subset of it. The purpose of value efficiency analysis is to estimate the increase in outputs and/or decrease in inputs needed to reach the indifference contour of the value function at its optimum. In this paper, we briefly review the main ideas in value efficiency analysis, discuss practical aspects related to its use, and consider some extensions.

8.
Presumably, input-output coefficients reflect technology: these coefficients measure the input requirements per unit of product. The concept has been extended to consumption theory, where it models expenditure shares. Input-output coefficients are extracted from the national accounts of an economy by taking average proportions between inputs and outputs. Since observed proportions embody all sorts of inefficiencies, this practice blurs the measurement of technology. Input requirements are better measured by minimal proportions between inputs and outputs. This approach separates the measurement of technology from that of productive efficiency.
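The contrast can be formalized schematically (this is one natural reading of the abstract, not necessarily the paper's exact definitions). With observed inputs x_{ik} and output y_k over observations k, the conventional and the proposed coefficients for input i are

a_i^{\mathrm{avg}} = \frac{\sum_k x_{ik}}{\sum_k y_k}
\qquad\text{versus}\qquad
a_i^{\min} = \min_k \frac{x_{ik}}{y_k},
\qquad\text{with}\quad a_i^{\min} \le a_i^{\mathrm{avg}},

so the gap between the two is attributable to productive inefficiency rather than to technology.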

9.
Data Envelopment Analysis (DEA) applications frequently involve nonsubstitutable inputs and nonsubstitutable outputs (that is, fixed-proportion technologies), yet DEA theory requires substitutability. In this paper, we illustrate the consequences of nonsubstitutability for DEA efficiency estimates, and we develop new efficiency indicators that are similar to those of conventional DEA models except that they assume nonsubstitutability. Then, using simulated and real-world datasets that encompass fixed-proportion technologies, we compare DEA efficiency estimates with those of the new indicators. The examples demonstrate that DEA efficiency estimates are biased when inputs and outputs are nonsubstitutable. The degree of bias varies considerably among Decision Making Units, resulting in substantial differences in efficiency rankings between DEA and the new measures, and over 90% of the units that DEA identifies as efficient are, in truth, not efficient. We conclude that when inputs and outputs are not substitutable, whether for technological or other reasons, conventional DEA models should be replaced with models that account for nonsubstitutability.
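The fixed-proportion technology at issue is, in its textbook single-output (Leontief) form,

y = \min_i \left\{ \frac{x_i}{a_i} \right\},

whose L-shaped isoquants admit no substitution between inputs; the piecewise-linear, substitutable frontiers that conventional DEA constructs cannot coincide with them, which is the source of the bias the paper documents.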

10.
A two-stage production process assumes that the first stage transforms external inputs into a number of intermediate measures, which are then used as inputs to the second stage, which produces the final outputs. The fundamental approaches to two-stage network data envelopment analysis are the multiplicative and the additive efficiency-decomposition approaches. Both assume a series relationship between the two stages, but they differ in the definition of the overall system efficiency as well as in the way they conceptualize the decomposition of the overall efficiency into the efficiencies of the individual stages. In this paper, we first show that the efficiency estimates obtained by the additive decomposition method are biased, unduly favouring one stage over the other, while those obtained by the multiplicative method are not unique. We then present a novel approach for estimating unique and unbiased efficiency scores for the individual stages, which are then composed to obtain the efficiency of the overall system by selecting the aggregation method a posteriori. Given the particular structure of two-stage processes, which arises from the conflicting role of the intermediate measures, we develop an envelopment model to locate the efficient frontier and justify its derivation from our primal multiplier efficiency assessment model. The results derived from our approach are compared with those obtained by the aforementioned basic methods on experimental data as well as on test data drawn from the literature. Similarities and dissimilarities in the results are rigorously justified.
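In the notation commonly used in this literature, with e the overall efficiency and e_1, e_2 the stage efficiencies, the two basic decompositions critiqued above take the forms

e = e_1 \cdot e_2 \quad\text{(multiplicative)}
\qquad\text{and}\qquad
e = w_1 e_1 + w_2 e_2,\ \ w_1 + w_2 = 1 \quad\text{(additive)},

where in the additive approach the weights w_1, w_2 are typically derived from the virtual resources consumed by each stage. The paper's argument is that this weighting is what unduly favours one stage, while the multiplicative split of a given overall score need not be unique.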

11.
The bank efficiency literature lacks an agreed definition of bank outputs and inputs. This is problematic given the long-standing controversy concerning the status of deposits, but also because bank efficiency estimates are known to be affected by the inclusion of additional outputs such as non-traditional (fee-based) activities or risk measures. This paper proposes a data-driven identification of bank outputs and inputs using the directional technology distance function. While previous applications of this tool used symmetric expansion or contraction directions, we focus on a set of orthogonal directions, each corresponding to an assumption about the input/output status of an individual variable. These directions correspond to a set of different specifications whose estimated coefficients can be used to determine the input or output status of every variable except the regressand. Our empirical analysis reveals a very consistent pattern across the alternative specifications estimated: there is strong evidence that customer deposits are an input and that non-performing loans are an important undesirable output. Finally, the orthogonal expansions/contractions we consider avoid the simultaneity problem raised by the “convenient normalization” commonly used to impose linear homogeneity in stochastic frontier estimation.
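For reference, the directional technology distance function underlying this analysis is standardly defined, for a technology T and direction vector g = (g_x, g_y), as

\vec{D}_T(x, y;\, g_x, g_y) = \max\{\beta : (x - \beta g_x,\; y + \beta g_y) \in T\}.

An orthogonal direction, as we read the abstract, sets all but one coordinate of (g_x, g_y) to zero, so each estimated specification probes whether moving a single variable up or down leads toward the frontier, and the estimated coefficients then reveal that variable's input or output status.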

12.
State Transport Undertakings (STUs) are key players in providing mass road transport in India. Because they operate under high levels of government-imposed regulatory constraints, it is imperative to study their efficiency levels. Since capital is a relatively scarce resource in developing countries like India, it is important to measure efficiency both in the short run, where some inputs are fixed, and in the long run, where all inputs are variable. The technique used for capturing efficiency is Data Envelopment Analysis (DEA). A key limitation of DEA models based on physical inputs and outputs is that, for an inefficient firm, reduction in some or all inputs may be recommended, whereas it may often be desirable for an inefficient firm to increase some less expensive inputs while reducing the use of relatively expensive ones. Hence, when market price data are available, it is advisable to use the cost variant of DEA. It is also possible to determine variable cost efficiency in the short run when some inputs cannot be varied; such inputs are referred to as "quasi-fixed" inputs. In this paper, we examine the short- and long-run efficiencies of select Indian bus companies, the STUs, over a period of 10 years, with fleet strength as the quasi-fixed input. By comparing the shadow price of the quasi-fixed input with its market price, one can ascertain whether the quantity of this input is suboptimally small or large. It is found that by adopting efficiency-enhancing practices, STUs could cumulatively reduce their operating costs by as much as 9123.35 million dollars. The tendency to minimize costs is also found to decline over time, and in the short run some STUs operate with a suboptimally low fleet size.

13.
An analysis of operations efficiency in large-scale distribution systems
This research applies Data Envelopment Analysis (DEA) methodology to evaluate the efficiency of units within a large-scale network of petroleum distribution facilities in the USA. Multiple inputs and outputs are incorporated into a broad set of DEA models, yielding a comprehensive approach to evaluating supply chain efficiency. The study empirically separates three recognized, important, yet distinct causes of performance shortfalls that have generally been difficult for managers to identify: (1) managerial effectiveness; (2) scale of operations and potential for a given market area (and efficiency of resource allocation given that scale); and (3) resource heterogeneity, captured via programmatic differences in efficiency. Overall, the efficiency differences identified raise insightful questions regarding top management's selection of the appropriate form and type of inputs and outputs, as well as questions regarding the DEA model form selected.

14.
There are two main methods for measuring the efficiency of decision-making units (DMUs): data envelopment analysis (DEA) and stochastic frontier analysis (SFA). Each method has advantages and disadvantages. DEA is more popular in the literature due to its simplicity: it requires no prior parametric assumptions and can measure the efficiency of DMUs with multiple inputs and multiple outputs. SFA, by contrast, is a parametric approach applicable to multiple inputs but only a single output, so it cannot be used in the many applied studies that feature multiple output variables. In this research, a method to transform multiple outputs into a virtual single output is proposed. The efficiency scores obtained from the virtual single output are close to (or even the same as, depending on the targeted parameters, at the expense of computation time and resources) the efficiency scores obtained from DEA with multiple outputs. This enables SFA to be used with a virtual single output. The proposed method is validated in a simulation study, and its usefulness is demonstrated in a real application using a hospital dataset from Turkey.
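The abstract does not spell out the transformation itself; as a purely illustrative stand-in, one can picture the virtual single output as a weighted aggregate of the output columns, with weights tuned until single-output efficiency scores track the multi-output DEA scores. The function virtual_output, the data and the weights below are all hypothetical:

import numpy as np

def virtual_output(Y, weights):
    # Collapse an (n_dmus, n_outputs) matrix into one column using
    # normalized common weights; the targeted parameters the abstract
    # mentions would be calibrated so the resulting single-output
    # scores approximate the multi-output DEA scores.
    w = np.asarray(weights, dtype=float)
    return Y @ (w / w.sum())

Y = np.array([[120.0, 30.0], [90.0, 45.0], [150.0, 20.0]])
y_single = virtual_output(Y, weights=[0.7, 0.3])   # assumed weights
# y_single can then serve as the single dependent variable of an SFA model.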

15.
This paper studies the use of DEA (data envelopment analysis) as a tool for evaluating and planning the economic performance of China's cities (28 in all), which play a critical role in the government's program of economic development. DEA promises advantages that include the absence of any need to assign weights on an a priori basis (to reflect the supposed relative importance of various outputs or inputs) when evaluating technical efficiency. It is also unnecessary to explicitly specify underlying functions that prescribe the analytical form of the relations between inputs and outputs. Finally, as the paper illustrates, DEA can be used to identify sources, and estimate amounts, of inefficiency in each city's performance, as well as to identify returns-to-scale possibilities, in ways that seem well suited to the mixture of centralized and decentralized planning and performance that China is currently trying to use.

16.
Data Envelopment Analysis (DEA) is a methodology that computes efficiency values for decision making units (DMUs) in a given period by comparing their outputs with their inputs. In many applications, the inputs and outputs of DMUs are monitored over time, and there might be a time lag between the consumption of inputs and the production of outputs. We develop an approach that aims to capture this time lag when assigning efficiency values to DMUs, and we propose using weight restrictions in conjunction with the model. Our computational results on randomly generated problems demonstrate that the developed approach works well under a large variety of experimental conditions. We also apply the approach to a real data set to evaluate research institutions.
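A minimal sketch of the data arrangement this implies: outputs observed in period t are matched with inputs consumed L periods earlier, and each aligned cross-section is then scored with an ordinary DEA model. The lag, the dimensions and the helper lagged_panels are hypothetical, and the paper additionally imposes weight restrictions:

import numpy as np

def lagged_panels(X, Y, lag):
    # X, Y: arrays of shape (T, n_dmus, n_dims) over T periods.
    # Pair outputs at period t with inputs at period t - lag.
    return X[:-lag], Y[lag:]

T, n = 6, 10
X = np.random.rand(T, n, 2)   # e.g. research funding and staff
Y = np.random.rand(T, n, 1)   # e.g. publications appearing later
X_lag, Y_lag = lagged_panels(X, Y, lag=2)
# Each aligned cross-section (X_lag[s], Y_lag[s]) can now be evaluated
# with a standard model such as the ccr_efficiency sketch under item 2.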

17.
Pareto-Koopmans efficiency in Data Envelopment Analysis (DEA) is extended to stochastic inputs and outputs via probabilistic input-output vector comparisons in a given empirical production (possibility) set. In contrast to other approaches that have used chance-constrained programming formulations in DEA, the emphasis here is on joint chance constraints. An assumption of arbitrary but known probability distributions leads to the P-model of chance-constrained programming. A necessary condition for a DMU to be stochastically efficient and a sufficient condition for a DMU to be non-stochastically efficient are provided. Deterministic equivalents using the zero-order decision rules of chance-constrained programming and multivariate normal distributions take the form of an extended version of the additive model of DEA. Contact is also maintained with all other presently available deterministic DEA models, in the form of easily identified extensions that can be used to formalize the treatment of efficiency when stochastic elements are present.
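In generic chance-constrained notation (a schematic form consistent with the abstract, not the paper's exact model), the P-model with a joint chance constraint evaluates DMU o by maximizing the probability that it is dominated within the empirical production set, with tildes marking random inputs and outputs:

\max_{\lambda \ge 0}\ \ P\left( \sum_j \lambda_j \tilde{y}_{rj} \ge \tilde{y}_{ro}\ \ \forall r,\ \ \sum_j \lambda_j \tilde{x}_{ij} \le \tilde{x}_{io}\ \ \forall i \right).

Evaluating all constraints under a single probability is what distinguishes this joint formulation from the separate marginal chance constraints used elsewhere; under multivariate normality and zero-order decision rules it reduces to the extended additive model mentioned above.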

18.
This paper examines the consequences for social efficiency when the locally provided public input can be differentially allocated among residents. We derive the distributional efficiency condition, that is, the distribution of public inputs that maximizes within-city gains from trade. Differential allocation also modifies the standard (Samuelsonian) allocative efficiency condition. Additionally, we explore the consequences of differential allocation for the median voter model: standard empirical voter models are seriously flawed because they fail to distinguish final public output production from either individual demand or the distribution of publicly provided inputs. Finally, we derive the club sharing efficiency condition.

19.
Banking technology is typically characterized by multiple inputs and multiple outputs that are associated with various attributes, such as different types of deposits, loans, numbers of accounts, classes of employees and locations of branches. These quality differentials in inputs and outputs are mostly ignored in empirical studies, an omission that makes the practical value of productivity studies in organizations like banks questionable, because quality is a key component of performance. This paper proposes using hedonic aggregator functions (as a tool for aggregating inputs and outputs with quality attributes) within an input distance function framework, and analyzes the impact of banking deregulation on efficiency and total factor productivity (TFP) change in the Indian banking industry using panel data for the period 1996–2005. Empirical results indicate that banks improved their efficiency (from 61% in 1996 to 72% in 2005) during the post-deregulation period, and the gain in efficiency of state-owned banks surpassed that of private banks. Improvement in the capital base, as indicated by an increased capital adequacy ratio, played an important role in ushering in these efficiency gains. The returns-to-scale estimates suggest that state-owned banks are operating far above their efficient scale and that cost savings can be obtained by reducing their size of operations. Overall, TFP growth was above 3.5% annually, with both technical progress and technical efficiency change consistently playing an important role in shaping it. Copyright © 2010 John Wiley & Sons, Ltd.

20.
Centralized Resource Allocation Using Data Envelopment Analysis
While conventional DEA models set targets separately for each DMU, in this paper we consider a centralized decision maker (DM) who “owns” or supervises all the operating units. In such an intra-organizational scenario the DM has an interest in maximizing the efficiency of individual units while total input consumption is minimized or total output production is maximized. Two new DEA models are presented for such resource allocation: one type seeks radial reductions of the total consumption of every input, while the other seeks separate reductions for each input according to a preference structure. In both cases, total output production is guaranteed not to decrease. The two key features of the proposed models are their simplicity and the fact that both project all DMUs onto the efficient frontier. The dual formulation shows that optimizing total input consumption and output production is equivalent to finding weights that maximize the relative efficiency of a virtual DMU with average inputs and outputs. A graphical interpretation as well as numerical results of the proposed models are presented.
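The duality result quoted at the end can be illustrated directly: under constant returns to scale the average of the observed DMUs lies inside the production cone, so one can append it as a virtual unit and score it with an ordinary CCR model. The sketch below assumes the hypothetical ccr_efficiency function and toy data from the sketch under item 2 are in scope:

import numpy as np

X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])

# Append the average DMU; under CRS the frontier is unchanged.
X_avg = np.vstack([X, X.mean(axis=0)])
Y_avg = np.vstack([Y, Y.mean(axis=0)])
theta_central = ccr_efficiency(X_avg, Y_avg, k=len(X))
# theta_central, the score of the "virtual average DMU", corresponds to the
# radial reduction in total input consumption attainable by the central planner.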
