Similar Literature
20 similar documents found.
1.
DEA (Data Envelopment Analysis) attempts to identify sources and estimate amounts of inefficiencies contained in the outputs and inputs generated by managed entities called DMUs (Decision Making Units). Explicit formulation of underlying functional relations with specified parametric forms relating inputs to outputs is not required. An overall (scalar) measure of efficiency is obtained for each DMU from the observed magnitudes of its multiple inputs and outputs without requiring use of a priori weights or relative value assumptions and, in addition, sources and amounts of inefficiency are estimated for each input and each output for every DMU. Earlier theory is extended so that DEA can deal with zero inputs and outputs and zero virtual multipliers (shadow prices). This is accomplished by partitioning DMUs into six classes via primal and dual representation theorems, by means of which restrictions to positive observed values for all inputs and outputs are eliminated, along with the positivity conditions imposed on the variables that are usually secured by recourse to non-Archimedean concepts. Three of the six classes are scale inefficient, and two of the three scale-efficient classes are also technically (zero-waste) efficient. The refereeing process of this paper was handled through R. Banker. This paper was prepared as part of the research supported by National Science Foundation grant SES-8722504 and by the IC2 Institute of The University of Texas and was initially submitted in May 1985.
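For concreteness, the CCR model referred to throughout these abstracts reduces, in envelopment form, to one small linear program per DMU. The sketch below is a minimal input-oriented version assuming strictly positive data (i.e. not the zero-data extension this paper develops), using scipy; all data are made up:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR (constant returns to scale) efficiency of DMU o.
    X is (m, n) inputs, Y is (s, n) outputs; columns are DMUs."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                  # minimise theta; vars = [theta, lambda]
    A_in = np.hstack([-X[:, [o]], X])            # sum_j lam_j x_ij <= theta * x_io
    A_out = np.hstack([np.zeros((s, 1)), -Y])    # sum_j lam_j y_rj >= y_ro
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(None, None)] + [(0.0, None)] * n,
                  method="highs")
    return res.fun

# toy data: 3 DMUs, one input, one output
X = np.array([[2.0, 4.0, 8.0]])
Y = np.array([[2.0, 4.0, 4.0]])
scores = [ccr_efficiency(X, Y, o) for o in range(3)]   # [1.0, 1.0, 0.5]
```

The third DMU produces the same output as the second from twice the input, hence its score of 0.5.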

2.
This paper develops a new methodology to measure and decompose global DMU efficiency into efficiencies of inputs (or outputs). The basic idea rests on the fact that a global DMU efficiency score might be misleading when managers proceed to reallocate their inputs or redefine their outputs. The literature provides a basic measure for the global DMU efficiency score. A revised model was developed for measuring efficiencies of global DMUs and their input (or output) efficiency components, based on a hypothesis of virtual DMUs. The present paper suggests a method for measuring global DMU efficiency simultaneously with the efficiencies of its input components, which we call the input-decomposition DEA model (ID-DEA), and the efficiencies of its output components, which we call the output-decomposition DEA model (OD-DEA). These twin models differ from the super-efficiency model (SE-DEA) and the common-set-of-weights model (CSW-DEA). The twin models (ID-DEA, OD-DEA) were applied to agricultural farms; the results gave different efficiency scores for inputs (or outputs), while the global DMU efficiency score was given by the Charnes, Cooper and Rhodes (Charnes et al., 1978) [1] CCR78 model. The rationale of our new hypothesis and model is that managers do not have the same level of information about all inputs and outputs, which constrains them to manage resources by (global) efficiency scores. Each input/output therefore has a different reality depending on the manager's decision and the information available at the time of the decision. This paper decomposes global DMU efficiency into input (or output) component efficiencies. Each component has its own score instead of a single global DMU score. These findings should improve management decision making about reallocating inputs and redefining outputs.
Concerning policy implications of the DEA twin models, they help policy makers to assess, ameliorate and reorient their strategies and execute programs towards enhancing best practices and minimising losses.

3.
Traditionally, data envelopment analysis (DEA) requires all decision-making units (DMUs) to have similar characteristics and experiences within the same external conditions. In many cases, this assumption fails to hold, and thus, difficulties will be encountered to some extent when measuring efficiency with a standard DEA model. Ideally, the performance of DMUs with different characteristics could be examined using the DEA meta-frontier framework. However, some of these DMUs are mixed-type DMUs that may affiliate with more than one group. Furthermore, the total number of observations of these mixed-type DMUs is limited. This is one of the common problems when studies focus on faculty research performance in higher education institutions. In general, a faculty member is affiliated with a certain department, and if the departmental assessment policy is not suitable for faculty members who are involved in interdisciplinary research, their performance could be underestimated. Therefore, the proposed model is an extension of the DEA meta-frontier framework that can assess the performance of mixed-type DMUs by constructing the reference set without the same type of DMUs. In this paper, the scientific research efficiency of faculty members at Inner Mongolia University is used as an example to provide a better understanding of the proposed model. The proposed model is intended to provide a fair and balanced performance assessment method that reflects actual performance, especially for mixed-type DMUs.

4.
In this paper we propose a new technique for incorporating environmental effects and statistical noise into a producer performance evaluation based on data envelopment analysis (DEA). The technique involves a three-stage analysis. In the first stage, DEA is applied to outputs and inputs only, to obtain initial measures of producer performance. In the second stage, stochastic frontier analysis (SFA) is used to regress first stage performance measures against a set of environmental variables. This provides, for each input or output (depending on the orientation of the first stage DEA model), a three-way decomposition of the variation in performance into a part attributable to environmental effects, a part attributable to managerial inefficiency, and a part attributable to statistical noise. In the third stage, either inputs or outputs (again depending on the orientation of the first stage DEA model) are adjusted to account for the impact of the environmental effects and the statistical noise uncovered in the second stage, and DEA is used to re-evaluate producer performance. Throughout the analysis emphasis is placed on slacks, rather than on radial efficiency scores, as appropriate measures of producer performance. An application to nursing homes is provided to illustrate the power of the three-stage methodology.

5.
This paper presents stochastic models in data envelopment analysis (DEA) for the possibility of variations in inputs and outputs. The efficiency measure of a decision making unit (DMU) is defined via joint probabilistic comparisons of inputs and outputs with other DMUs and can be characterized by solving a chance constrained programming problem. By utilizing the theory of chance constrained programming, deterministic equivalents are obtained for both situations of multivariate symmetric random disturbances and a single random factor in the production relationships. The linear deterministic equivalent and its dual form are obtained via goal programming theory under the assumption of the single random factor. An analysis of stochastic variable returns to scale is developed using the idea of stochastic supporting hyperplanes. The relationships of our stochastic DEA models with some conventional DEA models are also discussed.

6.
王中魁 (Wang Zhongkui), 《价值工程》 (Value Engineering), 2010, 29(34): 153-155
Using a DEA model, this paper conducts an empirical study of the operating efficiency of light industry in China's 31 provinces, autonomous regions and municipalities. The results show that the light industry of most provinces is not DEA-efficient, notably in the major light-industry provinces of the developed coastal regions. The paper analyses this phenomenon in depth and proposes corresponding countermeasures.

7.
Data Envelopment Analysis (DEA) is a methodology that computes efficiency values for decision making units (DMU) in a given period by comparing the outputs with the inputs. In many applications, inputs and outputs of DMUs are monitored over time. There might be a time lag between the consumption of inputs and the production of outputs. We develop an approach that aims to capture the time lag between the outputs and the inputs in assigning the efficiency values to DMUs. We propose using weight restrictions in conjunction with the model. Our computational results on randomly generated problems demonstrate that the developed approach works well under a large variety of experimental conditions. We also apply our approach on a real data set to evaluate research institutions.

8.
Data Envelopment Analysis (DEA) is a linear programming methodology for measuring the efficiency of Decision Making Units (DMUs) to improve organizational performance in the private and public sectors. However, if the efficiency score of a new DMU is required, the DEA analysis has to be re-run; since datasets in many fields are now growing rapidly, this can demand a huge amount of computation. Building on previous studies, this paper establishes a linkage between the DEA method and machine learning (ML) algorithms, and proposes an alternative approach that combines DEA with ML (ML-DEA) algorithms to measure and predict the DEA efficiency of DMUs. Four ML-DEA algorithms are discussed, namely the DEA-CCR model combined with a back-propagation neural network (BPNN-DEA), with a genetic algorithm (GA) integrated with a back-propagation neural network (GANN-DEA), with support vector machines (SVM-DEA), and with improved support vector machines (ISVM-DEA), respectively. To illustrate the applicability of the above models, the performance of Chinese manufacturing listed companies in 2016 is measured, predicted and compared with the DEA efficiency scores obtained by the DEA-CCR model. The empirical results show that the average accuracy of the predicted efficiency of DMUs is about 94%, and the comprehensive performance order of the four ML-DEA algorithms, ranked from good to poor, is GANN-DEA, BPNN-DEA, ISVM-DEA, and SVM-DEA.
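The ML-DEA pipeline described above (compute DEA scores once, then train a predictor so new DMUs can be scored without re-solving the programs) can be sketched in miniature. The example below is a deliberate simplification: a single-input, single-output CRS setting, where the CCR score reduces to a normalised output/input ratio, with an ordinary least-squares surrogate standing in for the paper's neural-network and SVM models; all data are synthetic.

```python
import numpy as np

# synthetic single-input, single-output sample: under CRS the CCR score
# is just (y/x) divided by the best observed (y/x) ratio
rng = np.random.default_rng(0)
x = rng.uniform(1.0, 10.0, 50)            # inputs of 50 DMUs
y = x * rng.uniform(0.3, 1.0, 50)         # outputs below the frontier
eff = (y / x) / (y / x).max()             # DEA scores = training labels

# surrogate model: least-squares fit of the scores on simple features
A = np.column_stack([np.ones_like(x), x, y, y / x])
coef, *_ = np.linalg.lstsq(A, eff, rcond=None)
pred = A @ coef                           # in-sample predictions

# a new DMU (input 5, output 4) can now be scored without re-running DEA
new_score = np.array([1.0, 5.0, 4.0, 4.0 / 5.0]) @ coef
```

Because the ratio y/x is itself a feature here, the surrogate fits the training scores almost exactly; in the multi-input, multi-output settings of the paper, the nonlinear learners earn their keep.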

9.
Environmental issues are becoming more and more important in our everyday life. Data Envelopment Analysis (DEA) is a tool developed for measuring relative operational efficiency. DEA can also be employed to estimate environmental efficiency where undesirable outputs like greenhouse gases exist. The classical DEA method identifies best practices among a given empirical data set. In many situations, however, it is advantageous to determine the worst practices and perform efficiency evaluation by comparing DMUs with the full-inefficient frontier. This strategy requires that the conventional production possibility set is defined from a reverse perspective. In this paper, presence of both desirable and undesirable outputs is assumed and a methodological framework for performing an unbiased efficiency analysis is proposed. The reverse production possibility set is defined and new models are presented regarding the full-inefficient frontier. The operational, environmental and overall reverse efficiencies are studied. The important notion of weak disposability is discussed and the effects of this assumption on the proposed models are investigated. The capability of the proposed method is examined using data from a real-world application about paper production.

10.
In this paper we propose a target efficiency DEA model that allows for the inclusion of environmental variables in a one stage model while maintaining a high degree of discrimination power. The model estimates the impact of managerial and environmental factors on efficiency simultaneously. A decomposition of the overall technical efficiency into two components, target efficiency and environmental efficiency, is derived. Estimation of target efficiency scores requires the solution of a single large non-linear optimization problem and provides both a joint estimation of target efficiency scores from all DMUs and an estimation of a common scalar expressing the environmental impact on efficiency for each environmental factor. We argue that if the indices on environmental conditions are constructed as the percentage of output with certain attributes present, then it is reasonable to let all reference DMUs characterized by a composed fraction lower than the fraction of output possessing the attribute of the evaluated DMU enter as potential dominators. It is shown that this requirement transforms the cone-ratio constraints on intensity variables in the BM-model (Banker and Morey 1986) into endogenous handicap functions on outputs. Furthermore, a priori information or general agreements on allowable handicap values can be incorporated into the model along the same lines as specifications of assurance regions in standard DEA.

11.
Centralized Resource Allocation Using Data Envelopment Analysis
While conventional DEA models set targets separately for each DMU, in this paper we consider that there is a centralized decision maker (DM) who “owns” or supervises all the operating units. In such intraorganizational scenario the DM has an interest in maximizing the efficiency of individual units at the same time that total input consumption is minimized or total output production is maximized. Two new DEA models are presented for such resource allocation. One type of model seeks radial reductions of the total consumption of every input while the other type seeks separate reductions for each input according to a preference structure. In both cases, total output production is guaranteed not to decrease. The two key features of the proposed models are their simplicity and the fact that both of them project all DMUs onto the efficient frontier. The dual formulation shows that optimizing total input consumption and output production is equivalent to finding weights that maximize the relative efficiency of a virtual DMU with average inputs and outputs. A graphical interpretation as well as numerical results of the proposed models are presented.
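The dual interpretation quoted above, that optimizing total input consumption and output production is equivalent to maximizing the relative efficiency of a virtual DMU with average inputs and outputs, can be illustrated on made-up numbers. In the single-input, single-output CRS case the efficiency of any (possibly virtual) point is just its output/input ratio against the best observed ratio:

```python
import numpy as np

# toy single-input, single-output data (one entry per DMU)
x = np.array([2.0, 4.0, 8.0])
y = np.array([2.0, 4.0, 4.0])

def crs_eff(x0, y0):
    """CRS efficiency of a (possibly virtual) point: its output/input
    ratio divided by the best observed ratio."""
    return (y0 / x0) / (y / x).max()

scores = [crs_eff(xi, yi) for xi, yi in zip(x, y)]   # [1.0, 1.0, 0.5]
avg_eff = crs_eff(x.mean(), y.mean())                # virtual "average" DMU
# note: avg_eff (10/14, about 0.714) is NOT the mean of the individual
# scores (about 0.833) -- the virtual-DMU view aggregates differently
```

This gap between the virtual-average score and the average of individual scores is what makes the centralized view a genuinely different objective rather than a rescaling of the per-DMU one.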

12.
Data envelopment analysis (DEA) is a non-parametric approach for measuring the relative efficiencies of peer decision making units (DMUs). In recent years, it has been widely used to evaluate two-stage systems under different organization mechanisms. This study modifies the conventional leader–follower DEA models for two-stage systems by considering the uncertainty of data. The dual deterministic linear models are first constructed from the stochastic CCR models under the assumption that all components of inputs, outputs, and intermediate products are related only with some basic stochastic factors, which follow continuous and symmetric distributions with nonnegative compact supports. The stochastic leader–follower DEA models are then developed for measuring the efficiencies of the two stages. The stochastic efficiency of the whole system can be uniquely decomposed into the product of the efficiencies of the two stages. Relationships between stochastic efficiencies from stochastic CCR and stochastic leader–follower DEA models are also discussed. An example of the commercial banks in China is considered using the proposed models under different risk levels.

13.
Data envelopment analysis (DEA) has long been used to measure the technical efficiency of decision-making units (DMUs). However, the major problem of traditional DEA methods is that they do not consider possible intermediate effects. Recently, many papers have applied network DEA models to evaluate efficiency scores. However, the linking activity of DMUs is still hard to recognize. Hence, we employ DEMATEL to obtain the linking activity of DMUs. Our empirical research shows that the proposed method can effectively identify the relationships between variables and derive reasonable results in network DEA.

14.
Data Envelopment Analysis (DEA) has been widely studied in the literature since its inception in 1978. The methodology behind the classical DEA, the oriented method, is to hold inputs (outputs) constant and to determine how much of an improvement in the output (input) dimensions is necessary in order to become efficient. This paper extends this methodology in two substantive ways. First, a method is developed that determines the least-norm projection from an inefficient DMU to the efficient frontier in both the input and output space simultaneously, and second, introduces the notion of the observable frontier and its subsequent projection. The observable frontier is the portion of the frontier that has been experienced by other DMUs (or convex combinations of such) and thus, the projection onto this portion of the frontier guarantees a recommendation that has already been demonstrated by an existing DMU or a convex combination of existing DMUs. A numerical example is used to illustrate the importance of these two methodological extensions.

15.
Data envelopment analysis (DEA) assumes that inputs and outputs are measured on scales in which larger numerical values correspond to greater consumption of inputs and greater production of outputs. We present a class of DEA problems in which one or more of the inputs or outputs are naturally measured on scales in which higher numerical values represent lower input consumption or lower output production. We refer to such quantities as reverse inputs and reverse outputs. We propose to incorporate reverse inputs and outputs into a DEA model by returning to the basic principles that lead to the DEA model formulation. We compare our method to reverse scoring, the most commonly used approach, and demonstrate the relative advantages of our proposed technique. We use this concept to analyze all 30 Major League Baseball (MLB) organizations during the 1999 regular season to determine their on-field and front office relative efficiencies. Our on-field DEA model employs one output and two symmetrically defined inputs, one to measure offense and one to measure defense. The defensive measure is such that larger values correspond to worse defensive performance, rather than better, and hence is a reverse input. The front office model uses one input. Its outputs, one of which is a reverse output, are the inputs to the on-field model. We discuss the organizational implications of our results.
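The pitfall that motivates handling reverse inputs directly can be seen with reverse scoring, the common workaround the authors compare against: the transformed efficiency scores depend on the arbitrary cap chosen for the transformation. A toy single-input, single-output illustration (all numbers invented):

```python
import numpy as np

# one reverse input (larger raw value = LESS input consumed, i.e. better)
# and one ordinary output, for three DMUs
raw = np.array([10.0, 30.0, 50.0])       # reverse-measured input
out = np.array([5.0, 6.0, 7.0])          # ordinary output

def crs_scores(x, y):
    """Single-input, single-output CRS efficiencies: (y/x) / max(y/x)."""
    r = y / x
    return r / r.max()

# reverse scoring: pick a cap M and treat x' = M - raw as a normal input;
# the resulting scores depend on the arbitrary choice of M
eff_60 = crs_scores(60.0 - raw, out)     # cap M = 60
eff_100 = crs_scores(100.0 - raw, out)   # cap M = 100
```

Both runs agree on which DMU is efficient, but the inefficiency scores of the others shift substantially with M, which is the distortion a basic-principles treatment of reverse inputs avoids.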

16.
This paper develops theory and algorithms for a “multiplicative” Data Envelopment Analysis (DEA) model employing virtual outputs and inputs as does the CCR ratio method for efficiency analysis. The frontier production function results here are of piecewise log-linear rather than piecewise linear form.

17.
The selection of inputs and outputs in DEA models is a vibrant methodological topic. At the same time, however, the impact of different measurement units for the selected inputs is understated in the empirical literature. Using the example of Czech farms, we show that the DEA method provides neither consistent score estimates nor a stable ranking across popular measurements of the labour and capital factors of production. For this reason, studies based on DEA efficiency results for differently measured inputs should be compared only with great caution.

18.
The proposed method of Stochastic Non-smooth Envelopment of Data (StoNED) for measuring efficiency has to date mainly found application in the analysis of production systems which have exactly one output. Therefore, the objective of this paper is to examine the applicability of StoNED when a ray production function models a production technology with multi-dimensional input and output. In addition to a general analysis of properties required by a ray production function for StoNED to be applicable, we conduct a Monte Carlo simulation in order to evaluate the quality of the frontier and efficiencies estimated by StoNED. The results are compared with those derived via Stochastic Frontier Analysis (SFA) and Data Envelopment Analysis (DEA). We show that StoNED provides competitive estimates in regard to other methods and especially in regard to the real functional form and efficiency.

19.
In some applications of data envelopment analysis (DEA) there may be doubt as to whether all the DMUs form a single group with a common efficiency distribution. The Mann–Whitney rank statistic has been used to evaluate if two groups of DMUs come from a common efficiency distribution under the assumption of them sharing a common frontier and to test if the two groups have a common frontier. These procedures have subsequently been extended using the Kruskal–Wallis rank statistic to consider more than two groups. This technical note identifies problems with the second of these applications of both the Mann–Whitney and Kruskal–Wallis rank statistics. It also considers possible alternative methods of testing if groups have a common frontier, and the difficulties of disaggregating managerial and programmatic efficiency within a non-parametric framework.

20.
Congestion is an economic phenomenon of overinvestment that occurs when excessive inputs decrease the maximally possible outputs. Although decision-makers are unlikely to decrease outputs by increasing inputs, congestion is widespread in reality. Identifying and measuring congestion can help decision-makers detect the problem of overinvestment. This paper reviews the development of the concept of congestion in the framework of data envelopment analysis (DEA), which is a widely accepted method for identifying and measuring congestion. In this paper, six main congestion identification and measurement methods are analysed through several numerical examples. We investigate the ideas of these methods, the contributions compared with the previous methods, and the existing shortcomings. Based on our analysis, we conclude that existing congestion identification and measurement methods are still inadequate. Three problems are anticipated for further study: maintaining the consistency between congestion and overinvestment, considering joint weak disposability assumption between desirable outputs and undesirable outputs, and quantifying the degree of congestion.
